Updates from: 06/02/2024 03:08:16
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/access-tokens.md
An *access token* contains claims that you can use in Azure Active Directory B2C
This article shows you how to request an access token for a web application and web API. For more information about tokens in Azure AD B2C, see the [overview of tokens in Azure Active Directory B2C](tokens-overview.md). > [!NOTE]
-> **Web API chains (On-Behalf-Of) is not supported by Azure AD B2C** - Many architectures include a web API that needs to call another downstream web API, both secured by Azure AD B2C. This scenario is common in clients that have a web API back end, which in turn calls a another service. This chained web API scenario can be supported by using the OAuth 2.0 JWT Bearer Credential grant, otherwise known as the On-Behalf-Of flow. However, the On-Behalf-Of flow is not currently implemented in Azure AD B2C. Although On-Behalf-Of works for applications registered in Microsoft Entra ID, it does not work for applications registered in Azure AD B2C, regardless of the tenant (Microsoft Entra ID or Azure AD B2C) that is issuing the tokens.
+> **Web API chains (On-Behalf-Of) is not supported by Azure AD B2C** - Many architectures include a web API that needs to call another downstream web API, both secured by Azure AD B2C. This scenario is common in clients that have a web API back end, which in turn calls another service. This chained web API scenario can be supported by using the OAuth 2.0 JWT Bearer Credential grant, otherwise known as the On-Behalf-Of flow. However, the On-Behalf-Of flow is not currently implemented in Azure AD B2C. Although On-Behalf-Of works for applications registered in Microsoft Entra ID, it does not work for applications registered in Azure AD B2C, regardless of the tenant (Microsoft Entra ID or Azure AD B2C) that is issuing the tokens.
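As a rough illustration of the token request this article covers, the following Python sketch uses MSAL to redeem an authorization code against an Azure AD B2C user flow and obtain an access token for a protected web API. This is a minimal sketch, not the article's sample: the tenant name, policy name, client ID, secret, API scope, and redirect URI are all placeholders.

```python
# Minimal sketch: exchange an authorization code for an access token with MSAL for Python.
# Tenant, policy, client ID/secret, scope, and redirect URI below are placeholders.
import msal

TENANT = "contoso"                      # your Azure AD B2C tenant name (assumption)
POLICY = "B2C_1_signupsignin1"          # your sign-up/sign-in user flow or custom policy
AUTHORITY = f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/{POLICY}"
API_SCOPE = f"https://{TENANT}.onmicrosoft.com/tasks-api/tasks.read"  # scope exposed by your web API

app = msal.ConfidentialClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",
    client_credential="your-client-secret",
    authority=AUTHORITY,
)

# 'auth_code_from_redirect' is the code returned to your redirect URI after the user signs in.
result = app.acquire_token_by_authorization_code(
    "auth_code_from_redirect",
    scopes=[API_SCOPE],
    redirect_uri="https://localhost:44316/signin-oidc",
)

if "access_token" in result:
    print(result["access_token"])       # sent as a Bearer token when calling the protected web API
else:
    print(result.get("error"), result.get("error_description"))
```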
## Prerequisites
active-directory-b2c Add Captcha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-captcha.md
Previously updated : 03/01/2024 Last updated : 05/03/2024
For the various page layouts, use the following page layout versions:
|Page layout |Page layout version range | |||
-| Selfasserted | >=2.1.29 |
-| Unifiedssp | >=2.1.17 |
-| Multifactor | >=1.2.15 |
+| Selfasserted | >=2.1.30 |
+| Unifiedssp | >=2.1.18 |
+| Multifactor | >=1.2.16 |
**Example:**
Use the steps in [Test the custom policy](tutorial-create-user-flows.md?pivots=b
## Next steps - Learn how to [Define a CAPTCHA technical profile](captcha-technical-profile.md).-- Learn how to [Configure CAPTCHA display control](display-control-captcha.md).
+- Learn how to [Configure CAPTCHA display control](display-control-captcha.md).
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md
In summary, you'll use Azure Lighthouse to allow a user or group in your Azure A
- An Azure AD B2C account with [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) role on the Azure AD B2C tenant. -- A Microsoft Entra account with the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the Microsoft Entra subscription. See how to [Assign a user as an administrator of an Azure subscription](../role-based-access-control/role-assignments-portal-subscription-admin.md).
+- A Microsoft Entra account with the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the Microsoft Entra subscription. See how to [Assign a user as an administrator of an Azure subscription](../role-based-access-control/role-assignments-portal-subscription-admin.yml).
## 1. Create or choose resource group
Use the following instructions to create a new Azure Alert, which will send an [
- Alert logic: Set **Number of results** **Greater than** **0**. - Evaluation based on: Select **120** for Period (in minutes) and **5** for Frequency (in minutes)
- ![Create a alert rule condition](./media/azure-monitor/alert-create-rule-condition.png)
+ ![Create an alert rule condition](./media/azure-monitor/alert-create-rule-condition.png)
After the alert is created, go to **Log Analytics workspace** and select **Alerts**. This page displays all the alerts that have been triggered in the duration set by **Time range** option.
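Before relying on the alert, it can help to run the underlying query yourself and confirm it returns rows over the same 120-minute window the rule evaluates. The sketch below uses the `azure-monitor-query` and `azure-identity` packages; the workspace ID and the `SigninLogs` query are placeholder assumptions to replace with the query your alert rule actually uses.

```python
# Minimal sketch: preview the rows an alert rule would evaluate over the last 120 minutes.
# Requires: pip install azure-monitor-query azure-identity
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

query = "SigninLogs | where ResultType != 0"   # assumption: substitute the query from your alert rule
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(minutes=120),              # same period as the alert condition
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        print(f"{len(table.rows)} rows returned")  # alert fires when this count is greater than 0
```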
active-directory-b2c Configure Authentication In Azure Static App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md
When the access token expires or the app session is invalidated, Azure Static We
- A premium Azure subscription. - If you haven't created an app yet, follow the guidance how to create an [Azure Static Web App](../static-web-apps/overview.md). - Familiarize yourself with the Azure Static Web App [staticwebapp.config.json](../static-web-apps/configuration.md) configuration file.-- Familiarize yourself with the Azure Static Web App [App Settings](../static-web-apps/application-settings.md).
+- Familiarize yourself with the Azure Static Web App [App Settings](../static-web-apps/application-settings.yml).
## Step 1: Configure your user flow
To register your application, follow these steps:
## Step 3: Configure the Azure Static App
-Once the application is registered with Azure AD B2C, create the following application secrets in the Azure Static Web App's [application settings](../static-web-apps/application-settings.md). You can configure application settings via the Azure portal or with the Azure CLI. For more information, check out the [Configure application settings for Azure Static Web Apps](../static-web-apps/application-settings.md#configure-application-settings) article.
+Once the application is registered with Azure AD B2C, create the following application secrets in the Azure Static Web App's [application settings](../static-web-apps/application-settings.yml). You can configure application settings via the Azure portal or with the Azure CLI. For more information, check out the [Configure application settings for Azure Static Web Apps](../static-web-apps/application-settings.yml#configure-application-settings) article.
Add the following keys to the app settings:
active-directory-b2c Configure Authentication Sample Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md
To create the web app registration, use the following steps:
1. Under **Name**, enter a name for the application (for example, *webapp1*). 1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**. 1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `https://localhost:44316/signin-oidc`.
-1. Under **Authentication**, go to **Implicit grant and hybrid flows**, select the **ID tokens (used for implicit and hybrid flows)** checkbox.
+1. Under **Manage**, select **Authentication**, go to **Implicit grant and hybrid flows**, and then select the **ID tokens (used for implicit and hybrid flows)** checkbox.
1. Under **Permissions**, select the **Grant admin consent to openid and offline access permissions** checkbox. 1. Select **Register**. 1. Select **Overview**.
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
To create a CNAME record for your custom domain:
1. Find the page for managing DNS records by consulting the provider's documentation or searching for areas of the web site labeled **Domain Name**, **DNS**, or **Name Server Management**. 1. Create a new TXT DNS record and complete the fields as shown below:
- 1. Name: `_dnsauth.contoso.com`, but you need to enter just `_dnsauth`.
+ 1. Name: `_dnsauth.login.contoso.com`, but you need to enter just `_dnsauth`.
1. Type: `TXT` 1. Value: Something like `75abc123t48y2qrtsz2bvk......`.
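After saving the record, you can confirm it has propagated before continuing with domain validation. Below is a minimal sketch using the third-party `dnspython` package; the host name is an example, not a value from your tenant.

```python
# Minimal sketch: check that the _dnsauth TXT record is publicly resolvable.
# Requires: pip install dnspython. The host name below is an example.
import dns.resolver

answers = dns.resolver.resolve("_dnsauth.login.contoso.com", "TXT")
for record in answers:
    # Each TXT record prints as one or more quoted strings; compare against the value you entered.
    print(record.to_text())
```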
active-directory-b2c Custom Policies Series Sign Up Or Sign In Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in-federation.md
Notice the claims transformations we defined in [step 3.2](#step-32define-cla
Just like in sign-in with a local account, you need to configure the [Microsoft Entra Technical Profiles](active-directory-technical-profile.md), which you use to connect to Microsoft Entra ID storage, to store or read a user social account.
-1. In the `ContosoCustomPolicy.XML` file, locate the `AAD-UserRead` technical profile and then add a new technical profile by using the following code:
+1. In the `ContosoCustomPolicy.XML` file, locate the `AAD-UserRead` technical profile and then add a new technical profile below it by using the following code:
```xml <TechnicalProfile Id="AAD-UserWriteUsingAlternativeSecurityId">
Use the following steps to add a combined local and social account:
```xml <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="localIdpAuthentication" AlwaysUseDefaultValue="true" /> ```
+ Make sure you also add the `authenticationSource` claim in the output claims collection of the `UserSignInCollector` self-asserted technical profile.
1. In the `UserJourneys` section, add a new user journey, `LocalAndSocialSignInAndSignUp` by using the following code:
active-directory-b2c Custom Policies Series Sign Up Or Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in.md
In this article, you learn how to write an Azure Active Directory B2C (Azure AD
Azure AD B2C uses OpenID Connect authentication protocol to verify user credentials. In Azure AD B2C, you send the user credentials alongside other information to a secure endpoint, which then determines if the credentials are valid or not. In a nutshell, when you use Azure AD B2C's implementation of OpenID Connect, you can outsource sign-up, sign in, and other identity management experiences in your web applications to Microsoft Entra ID.
-Azure AD B2C custom policy provides a OpenID Connect technical profile, which you use to make a call to a secure Microsoft endpoint. Learn more about [OpenID Connect technical profile](openid-connect-technical-profile.md).
+Azure AD B2C custom policy provides an OpenID Connect technical profile, which you use to make a call to a secure Microsoft endpoint. Learn more about [OpenID Connect technical profile](openid-connect-technical-profile.md).
## Prerequisites
You can sign in by entering the **Email Address** and **Password** of an existin
- Learn how to [Remove the sign-up link](add-sign-in-policy.md), so users can just sign in. -- Learn more about [OpenID Connect technical profile](openid-connect-technical-profile.md).
+- Learn more about [OpenID Connect technical profile](openid-connect-technical-profile.md).
active-directory-b2c Custom Policies Series Store User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-store-user.md
Previously updated : 01/11/2024 Last updated : 05/11/2024
We use the `ClaimGenerator` technical profile to execute three claims transforma
</Precondition> </Preconditions> </ValidationTechnicalProfile>
- <ValidationTechnicalProfile ReferenceId="DisplayNameClaimGenerator"/>
+ <ValidationTechnicalProfile ReferenceId="UserInputDisplayNameGenerator"/>
<ValidationTechnicalProfile ReferenceId="AAD-UserWrite"/> </ValidationTechnicalProfiles> <!--</TechnicalProfile>-->
To configure a display control, use the following steps:
1. Use the procedure in [step 6](#step-6upload-policy) and [step 7](#step-7test-policy) to upload your policy file, and test it. This time, you must verify your email address before a user account is created.
-<a name='update-user-account-by-using-azure-ad-technical-profile'></a>
## Update user account by using Microsoft Entra ID technical profile
-You can configure a Microsoft Entra ID technical profile to update a user account instead of attempting to create a new one. To do so, set the Microsoft Entra ID technical profile to throw an error if the specified user account doesn't already exist in the `Metadata` collection by using the following code. The *Operation* needs to be set to *Write*:
+You can configure a Microsoft Entra ID technical profile to update a user account instead of attempting to create a new one. To do so, use the following code in the technical profile's `Metadata` collection to control whether an error is thrown if the specified user account doesn't already exist. Also, remove the `Key="UserMessageIfClaimsPrincipalAlreadyExists"` metadata entry. The *Operation* needs to be set to *Write*:
```xml <Item Key="Operation">Write</Item>
- <Item Key="RaiseErrorIfClaimsPrincipalDoesNotExist">true</Item>
+ <Item Key="RaiseErrorIfClaimsPrincipalDoesNotExist">false</Item>
``` ## Use custom attributes
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
Azure Active Directory B2C [user flows and custom policies](user-flow-overview.m
- Support requests for public preview features can be submitted through regular support channels. ## User flows- |Feature |User flow |Custom policy |Notes | ||::|::|| | [Sign-up and sign-in](add-sign-up-and-sign-in-policy.md) with email and password. | GA | GA| | | [Sign-up and sign-in](add-sign-up-and-sign-in-policy.md) with username and password.| GA | GA | | | [Profile editing flow](add-profile-editing-policy.md) | GA | GA | |
-| [Self-Service password reset](add-password-reset-policy.md) | GA| GA| |
-| [Force password reset](force-password-reset.md) | GA | NA | |
-| [Phone sign-up and sign-in](phone-authentication-user-flows.md) | GA | GA | |
-| [Conditional Access and Identity Protection](conditional-access-user-flow.md) | GA | GA | Not available for SAML applications |
+| [Self-Service password reset](add-password-reset-policy.md) | GA| GA| Available in China cloud, but only for custom policies. |
+| [Force password reset](force-password-reset.md) | GA | GA | Available in China cloud, but only for custom policies. |
+| [Phone sign-up and sign-in](phone-authentication-user-flows.md) | GA | GA | Available in China cloud, but only for custom policies. |
| [Smart lockout](threat-management.md) | GA | GA | |
+| [Conditional Access and Identity Protection](conditional-access-user-flow.md) | GA | GA | Not available for SAML applications. Limited CA features are available in China cloud. Identity Protection is not available in China cloud. |
| [CAPTCHA](add-captcha.md) | Preview | Preview | You can enable it during sign-up or sign-in for Local accounts. | ## OAuth 2.0 application authorization flows
The following table summarizes the Security Assertion Markup Language (SAML) app
|Feature |User flow |Custom policy |Notes | ||::|::||
-| [Multi-language support](localization.md)| GA | GA | |
-| [Custom domains](custom-domain.md)| GA | GA | |
+| [Multi-language support](localization.md)| GA | GA | Available in China cloud, but only for custom policies. |
+| [Custom domains](custom-domain.md)| GA | GA | Available in China cloud, but only for custom policies. |
| [Custom email verification](custom-email-mailjet.md) | NA | GA| | | [Customize the user interface with built-in templates](customize-ui.md) | GA| GA| | | [Customize the user interface with custom templates](customize-ui-with-html.md) | GA| GA| By using HTML templates. |
-| [Page layout version](page-layout.md) | GA | GA | |
-| [JavaScript](javascript-and-page-layout.md) | GA | GA | |
+| [Page layout version](page-layout.md) | GA | GA | Available in China cloud, but only for custom policies. |
+| [JavaScript](javascript-and-page-layout.md) | GA | GA | Available in China cloud, but only for custom policies. |
| [Embedded sign-in experience](embedded-login.md) | NA | Preview| By using the inline frame element `<iframe>`. |
-| [Password complexity](password-complexity.md) | GA | GA | |
+| [Password complexity](password-complexity.md) | GA | GA | Available in China cloud, but only for custom policies. |
| [Disable email verification](disable-email-verification.md) | GA| GA| Not recommended for production environments. Disabling email verification in the sign-up process may lead to spam. |
The following table summarizes the Security Assertion Markup Language (SAML) app
||::|::|| |[AD FS](identity-provider-adfs.md) | NA | GA | | |[Amazon](identity-provider-amazon.md) | GA | GA | |
-|[Apple](identity-provider-apple-id.md) | GA | GA | |
+|[Apple](identity-provider-apple-id.md) | GA | GA | Available in China cloud, but only for custom policies. |
|[Microsoft Entra ID (Single-tenant)](identity-provider-azure-ad-single-tenant.md) | GA | GA | | |[Microsoft Entra ID (multitenant)](identity-provider-azure-ad-multi-tenant.md) | NA | GA | | |[Azure AD B2C](identity-provider-azure-ad-b2c.md) | GA | GA | |
The following table summarizes the Security Assertion Markup Language (SAML) app
|[Salesforce](identity-provider-salesforce.md) | GA | GA | | |[Salesforce (SAML protocol)](identity-provider-salesforce-saml.md) | NA | GA | | |[Twitter](identity-provider-twitter.md) | GA | GA | |
-|[WeChat](identity-provider-wechat.md) | Preview | GA | |
+|[WeChat](identity-provider-wechat.md) | Preview | GA | Available in China cloud, but only for custom policies. |
|[Weibo](identity-provider-weibo.md) | Preview | GA | | ## Generic identity providers
The following table summarizes the Security Assertion Markup Language (SAML) app
| Feature | Custom policy | Notes | | - | :--: | -- |
-| [Default SSO session provider](custom-policy-reference-sso.md#defaultssosessionprovider) | GA | |
-| [External login session provider](custom-policy-reference-sso.md#externalloginssosessionprovider) | GA | |
-| [SAML SSO session provider](custom-policy-reference-sso.md#samlssosessionprovider) | GA | |
-| [OAuth SSO Session Provider](custom-policy-reference-sso.md#oauthssosessionprovider) | GA| |
+| [Default SSO session provider](custom-policy-reference-sso.md#defaultssosessionprovider) | GA | Available in China cloud, but only for custom policies. |
+| [External login session provider](custom-policy-reference-sso.md#externalloginssosessionprovider) | GA | Available in China cloud, but only for custom policies. |
+| [SAML SSO session provider](custom-policy-reference-sso.md#samlssosessionprovider) | GA | Available in China cloud, but only for custom policies. |
+| [OAuth SSO Session Provider](custom-policy-reference-sso.md#oauthssosessionprovider) | GA| Available in China cloud, but only for custom policies. |
### Components
The following table summarizes the Security Assertion Markup Language (SAML) app
| Feature | Custom policy | Notes | | - | :--: | -- | | [MFA using time-based one-time password (TOTP) with authenticator apps](multi-factor-authentication.md#verification-methods) | GA | Users can use any authenticator app that supports TOTP verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app).|
-| [Phone factor authentication](phone-factor-technical-profile.md) | GA | |
+| [Phone factor authentication](phone-factor-technical-profile.md) | GA | Available in China cloud, but only for custom policies. |
| [Microsoft Entra multifactor authentication](multi-factor-auth-technical-profile.md) | GA | | | [One-time password](one-time-password-technical-profile.md) | GA | | | [Microsoft Entra ID](active-directory-technical-profile.md) as local directory | GA | |
active-directory-b2c Customize Ui With Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui-with-html.md
To host your HTML content in Blob storage, use the following steps:
To create a public container in Blob storage, perform the following steps:
+1. Under **Settings** in the leftmost menu, select **Configuration**.
+1. Enable **Allow Blob anonymous access**.
+1. Select **Save**.
1. Under **Data storage** in the left-hand menu, select **Containers**. 1. Select **+ Container**. 1. For **Name**, enter *root*. The name can be a name of your choosing, for example *contoso*, but we use *root* in this example for simplicity.
Configure Blob storage for Cross-Origin Resource Sharing by performing the follo
Validate that you're ready by performing the following steps: 1. Repeat the configure CORS step. For **Allowed origins**, enter `https://cors-test.codehappy.dev`
-1. Navigate to [www.test-cors.org](https://www.test-cors.org/)
+1. Navigate to [cors-test.codehappy.dev](https://cors-test.codehappy.dev/)
1. For the **Remote URL** box, paste the URL of your HTML file. For example, `https://your-account.blob.core.windows.net/root/azure-ad-b2c/unified.html` 1. Select **Send Request**. The result should be `XHR status: 200`.
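If you prefer to check the configuration from a script rather than the browser tool, the following sketch sends a request with an `Origin` header and prints the CORS response header. The blob URL and origin are placeholders; use the origin you added to **Allowed origins**.

```python
# Minimal sketch: verify the Blob storage CORS rule without a browser.
import requests

blob_url = "https://your-account.blob.core.windows.net/root/azure-ad-b2c/unified.html"  # placeholder
origin = "https://cors-test.codehappy.dev"  # must match an entry in Allowed origins

response = requests.get(blob_url, headers={"Origin": origin})
print(response.status_code)                                  # expect 200
print(response.headers.get("Access-Control-Allow-Origin"))   # expect the origin (or *) when CORS is configured
```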
active-directory-b2c Enable Authentication In Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app-with-api.md
# Enable authentication in your own Node.js web API by using Azure Active Directory B2C
-In this article, you learn how to create your web app that calls your web API. The web API needs to be protected by Azure Active Directory B2C (Azure AD B2C). To authorize access to a the web API, you serve requests that include a valid access token that's issued by Azure AD B2C.
+In this article, you learn how to create your web app that calls your web API. The web API needs to be protected by Azure Active Directory B2C (Azure AD B2C). To authorize access to the web API, you serve requests that include a valid access token that's issued by Azure AD B2C.
## Prerequisites
active-directory-b2c Enable Authentication React Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app-options.md
# Configure authentication options in a React application by using Azure Active Directory B2C
-This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your React single-page application (SPA). Before you start, familiarize yourself with the article [Configure authentication in an React SPA](configure-authentication-sample-react-spa-app.md) or [Enable authentication in your own React SPA](enable-authentication-react-spa-app.md).
+This article describes ways you can customize and enhance the Azure Active Directory B2C (Azure AD B2C) authentication experience for your React single-page application (SPA). Before you start, familiarize yourself with the article [Configure authentication in a React SPA](configure-authentication-sample-react-spa-app.md) or [Enable authentication in your own React SPA](enable-authentication-react-spa-app.md).
## Sign-in and sign-out behavior
export const msalConfig = {
## Next steps - Learn more: [MSAL.js configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react/docs).-
active-directory-b2c Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/error-codes.md
The following errors can be returned by the Azure Active Directory B2C service.
| `AADB2C99013` | The supplied grant_type [{0}] and token_type [{1}] combination is not supported. | | `AADB2C99015` | Profile '{0}' in policy '{1}' in tenant '{2}' is missing all InputClaims required for resource owner password credential flow. | [Create a resource owner policy](add-ropc-policy.md#create-a-resource-owner-policy) | |`AADB2C99002`| User doesn't exist. Please sign up before you can sign in. |
-| `AADB2C99027` | Policy '{0}' does not contain a AuthorizationTechnicalProfile with a corresponding ClientAssertionType. | [Client credentials flow](client-credentials-grant-flow.md) |
+| `AADB2C99027` | Policy '{0}' does not contain an AuthorizationTechnicalProfile with a corresponding ClientAssertionType. | [Client credentials flow](client-credentials-grant-flow.md) |
|`AADB2C90229`|Azure AD B2C throttled traffic if too many requests are sent from the same source in a short period of time| [Best practices for Azure Active Directory B2C](best-practices.md#testing) |
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-linkedin.md
zone_pivot_groups: b2c-policy-type
## Create a LinkedIn application
-To enable sign-in for users with a LinkedIn account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [LinkedIn Developers website](https://developer.linkedin.com/). For more information, see [Authorization Code Flow](/linkedin/shared/authentication/authorization-code-flow). If you don't already have a LinkedIn account, you can sign up at [https://www.linkedin.com/](https://www.linkedin.com/).
+To enable sign-in for users with a LinkedIn account in Azure Active Directory B2C (Azure AD B2C), you need to create an application in [LinkedIn Developers website](https://developer.linkedin.com/). If you don't already have a LinkedIn account, you can sign up at [https://www.linkedin.com/](https://www.linkedin.com/).
1. Sign in to the [LinkedIn Developers website](https://developer.linkedin.com/) with your LinkedIn account credentials. 1. Select **My Apps**, and then click **Create app**.
To enable sign-in for users with a LinkedIn account in Azure Active Directory B2
1. Agree to the LinkedIn **API Terms of Use** and click **Create app**. 1. Select the **Auth** tab. Under **Authentication Keys**, copy the values for **Client ID** and **Client Secret**. You'll need both of them to configure LinkedIn as an identity provider in your tenant. **Client Secret** is an important security credential. 1. Select the edit pencil next to **Authorized redirect URLs for your app**, and then select **Add redirect URL**. Enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C. Select **Update**.
-1. By default, your LinkedIn app isn't approved for scopes related to sign in. To request a review, select the **Products** tab, and then select **Sign In with LinkedIn**. When the review is complete, the required scopes will be added to your application.
+1. By default, your LinkedIn app isn't approved for scopes related to sign in. To request a review, select the **Products** tab, and then select **Sign In with LinkedIn using OpenID Connect**. When the review is complete, the required scopes will be added to your application.
> [!NOTE] > You can view the scopes that are currently allowed for your app on the **Auth** tab in the **OAuth 2.0 scopes** section.
To enable sign-in for users with a LinkedIn account in Azure Active Directory B2
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
-1. Select **Identity providers**, then select **LinkedIn**.
-1. Enter a **Name**. For example, *LinkedIn*.
+1. Select **Identity providers**, then select **New OpenID Connect provider**.
+1. Enter a **Name**. For example, *LinkedIn-OIDC*.
+1. For the **Metadata URL**, enter **https://www.linkedin.com/oauth/.well-known/openid-configuration**.
1. For the **Client ID**, enter the Client ID of the LinkedIn application that you created earlier. 1. For the **Client secret**, enter the Client Secret that you recorded.
+1. For the **Scope**, enter **openid profile email**.
+1. For the **Response type**, enter **code**.
+1. For the **User ID**, enter **email**.
+1. For the **Display name**, enter **name**.
+1. For the **Given name**, enter **given_name**.
+1. For the **Surname**, enter **family_name**.
+1. For the **Email**, enter **email**.
1. Select **Save**. ## Add LinkedIn identity provider to a user flow
At this point, the LinkedIn identity provider has been set up, but it's not yet
1. In your Azure AD B2C tenant, select **User flows**. 1. Click the user flow that you want to add the LinkedIn identity provider.
-1. Under the **Social identity providers**, select **LinkedIn**.
+1. Under the **Custom identity providers**, select **LinkedIn-OIDC**.
1. Select **Save**. 1. To test your policy, select **Run user flow**. 1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`. 1. Select the **Run user flow** button.
-1. From the sign-up or sign-in page, select **LinkedIn** to sign in with LinkedIn account.
+1. From the sign-up or sign-in page, select **LinkedIn-OIDC** to sign in with a LinkedIn account.
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
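If you'd rather not paste the token into a website, a small local decode shows the same claims. This is only a sketch for inspection: it base64url-decodes the payload and does not validate the signature.

```python
# Minimal sketch: decode a JWT payload locally to inspect the claims returned by Azure AD B2C.
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    payload = token.split(".")[1]                 # header.payload.signature
    payload += "=" * (-len(payload) % 4)          # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

token = "<paste the token returned by Azure AD B2C>"  # placeholder
claims = decode_jwt_payload(token)
print(json.dumps(claims, indent=2))               # look for idp, email, given_name, family_name
```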
You need to store the client secret that you previously recorded in your Azure A
1. Sign in to the [Azure portal](https://portal.azure.com/). 1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
+1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. On the Overview page, select **Identity Experience Framework**. 1. Select **Policy keys** and then select **Add**.
You need to store the client secret that you previously recorded in your Azure A
## Configure LinkedIn as an identity provider
-To enable users to sign in using an LinkedIn account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using a LinkedIn account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
Define a LinkedIn account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy.
Define a LinkedIn account as a claims provider by adding it to the **ClaimsProvi
```xml <ClaimsProvider> <Domain>linkedin.com</Domain>
- <DisplayName>LinkedIn</DisplayName>
+ <DisplayName>LinkedIn-OIDC</DisplayName>
<TechnicalProfiles>
- <TechnicalProfile Id="LinkedIn-OAuth2">
+ <TechnicalProfile Id="LinkedIn-OIDC">
<DisplayName>LinkedIn</DisplayName>
- <Protocol Name="OAuth2" />
+ <Protocol Name="OpenIdConnect" />
<Metadata>
- <Item Key="ProviderName">linkedin</Item>
- <Item Key="authorization_endpoint">https://www.linkedin.com/oauth/v2/authorization</Item>
- <Item Key="AccessTokenEndpoint">https://www.linkedin.com/oauth/v2/accessToken</Item>
- <Item Key="ClaimsEndpoint">https://api.linkedin.com/v2/me</Item>
- <Item Key="scope">r_emailaddress r_liteprofile</Item>
- <Item Key="HttpBinding">POST</Item>
- <Item Key="external_user_identity_claim_id">id</Item>
- <Item Key="BearerTokenTransmissionMethod">AuthorizationHeader</Item>
- <Item Key="ResolveJsonPathsInJsonTokens">true</Item>
- <Item Key="UsePolicyInRedirectUri">false</Item>
- <Item Key="client_id">Your LinkedIn application client ID</Item>
+ <Item Key="METADATA">https://www.linkedin.com/oauth/.well-known/openid-configuration</Item>
+ <Item Key="scope">openid profile email</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="response_types">code</Item>
+ <Item Key="UsePolicyInRedirectUri">false</Item>
+ <Item Key="client_id">Your LinkedIn application client ID</Item>
</Metadata> <CryptographicKeys>
- <Key Id="client_secret" StorageReferenceId="B2C_1A_LinkedInSecret" />
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_LinkedInSecret" />
</CryptographicKeys> <InputClaims /> <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="id" />
- <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="firstName.localized" />
- <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="lastName.localized" />
- <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="linkedin.com" AlwaysUseDefaultValue="true" />
- <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="email" />
+ <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
+ <OutputClaim ClaimTypeReferenceId="surname" PartnerClaimType="family_name" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="linkedin.com" AlwaysUseDefaultValue="true" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
</OutputClaims> <OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="ExtractGivenNameFromLinkedInResponse" />
- <OutputClaimsTransformation ReferenceId="ExtractSurNameFromLinkedInResponse" />
- <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
- <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
- <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
- <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ <OutputClaimsTransformation ReferenceId="CreateSubjectClaimFromAlternativeSecurityId" />
</OutputClaimsTransformations> <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
- </TechnicalProfile>
+ </TechnicalProfile>
</TechnicalProfiles> </ClaimsProvider> ```
Define a LinkedIn account as a claims provider by adding it to the **ClaimsProvi
1. Replace the value of **client_id** with the client ID of the LinkedIn application that you previously recorded. 1. Save the file.
-### Add the claims transformations
-
-The LinkedIn technical profile requires the **ExtractGivenNameFromLinkedInResponse** and **ExtractSurNameFromLinkedInResponse** claims transformations to be added to the list of ClaimsTransformations. If you don't have a **ClaimsTransformations** element defined in your file, add the parent XML elements as shown below. The claims transformations also need a new claim type defined named **nullStringClaim**.
-
-Add the **BuildingBlocks** element near the top of the *TrustFrameworkExtensions.xml* file. See *TrustFrameworkBase.xml* for an example.
-
-```xml
-<BuildingBlocks>
- <ClaimsSchema>
- <!-- Claim type needed for LinkedIn claims transformations -->
- <ClaimType Id="nullStringClaim">
- <DisplayName>nullClaim</DisplayName>
- <DataType>string</DataType>
- <AdminHelpText>A policy claim to store output values from ClaimsTransformations that aren't useful. This claim should not be used in TechnicalProfiles.</AdminHelpText>
- <UserHelpText>A policy claim to store output values from ClaimsTransformations that aren't useful. This claim should not be used in TechnicalProfiles.</UserHelpText>
- </ClaimType>
- </ClaimsSchema>
-
- <ClaimsTransformations>
- <!-- Claim transformations needed for LinkedIn technical profile -->
- <ClaimsTransformation Id="ExtractGivenNameFromLinkedInResponse" TransformationMethod="GetSingleItemFromJson">
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="givenName" TransformationClaimType="inputJson" />
- </InputClaims>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="nullStringClaim" TransformationClaimType="key" />
- <OutputClaim ClaimTypeReferenceId="givenName" TransformationClaimType="value" />
- </OutputClaims>
- </ClaimsTransformation>
- <ClaimsTransformation Id="ExtractSurNameFromLinkedInResponse" TransformationMethod="GetSingleItemFromJson">
- <InputClaims>
- <InputClaim ClaimTypeReferenceId="surname" TransformationClaimType="inputJson" />
- </InputClaims>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="nullStringClaim" TransformationClaimType="key" />
- <OutputClaim ClaimTypeReferenceId="surname" TransformationClaimType="value" />
- </OutputClaims>
- </ClaimsTransformation>
- </ClaimsTransformations>
-</BuildingBlocks>
-```
- [!INCLUDE [active-directory-b2c-add-identity-provider-to-user-journey](../../includes/active-directory-b2c-add-identity-provider-to-user-journey.md)]
Add the **BuildingBlocks** element near the top of the *TrustFrameworkExtensions
<OrchestrationStep Order="2" Type="ClaimsExchange"> ... <ClaimsExchanges>
- <ClaimsExchange Id="LinkedInExchange" TechnicalProfileReferenceId="LinkedIn-OAuth2" />
+ <ClaimsExchange Id="LinkedInExchange" TechnicalProfileReferenceId="LinkedIn-OIDC" />
</ClaimsExchanges> </OrchestrationStep> ```
Add the **BuildingBlocks** element near the top of the *TrustFrameworkExtensions
1. Select your relying party policy, for example `B2C_1A_signup_signin`. 1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`. 1. Select the **Run now** button.
-1. From the sign-up or sign-in page, select **LinkedIn** to sign in with LinkedIn account.
+1. From the sign-up or sign-in page, select **LinkedIn-OIDC** to sign in with a LinkedIn account.
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
As part of the LinkedIn migration from v1.0 to v2.0, an additional call to anoth
</OrchestrationStep> ```
-Obtaining the email address from LinkedIn during sign-up is optional. If you choose not to obtain the email from LinkedIn but require one during sign up, the user is required to manually enter the email address and validate it.
+Obtaining the email address from LinkedIn during sign-up is optional. If you choose not to obtain the email from LinkedIn but require one during sign-up, the user is required to manually enter the email address and validate it.
For a full sample of a policy that uses the LinkedIn identity provider, see the [Custom Policy Starter Pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/linkedin-identity-provider).
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
The `RunAsync` method in the _Program.cs_ file:
1. Initializes the auth provider using [OAuth 2.0 client credentials grant](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) flow. With the client credentials grant flow, the app is able to get an access token to call the Microsoft Graph API. 1. Sets up the Microsoft Graph service client with the auth provider:
+The previously published sample code is not available at this time.
+<!--:::code language="csharp" source="~/ms-identity-dotnetcore-b2c-account-management/src/Program.cs" id="ms_docref_set_auth_provider":::-->
The initialized _GraphServiceClient_ is then used in _UserService.cs_ to perform the user management operations. For example, getting a list of the user accounts in the tenant:
+The previously published sample code is not available at this time.
+<!--:::code language="csharp" source="~/ms-identity-dotnetcore-b2c-account-management/src/Services/UserService.cs" id="ms_docref_get_list_of_user_accounts":::-->
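Until the sample is republished, the following Python sketch illustrates the same two steps: acquiring an app-only token with the client credentials grant and listing user accounts through the Microsoft Graph REST API. It is not the original C# sample; the tenant, client ID, and secret are placeholders, and the app registration needs admin-consented Graph application permissions.

```python
# Minimal sketch: app-only token via client credentials, then list users through Microsoft Graph.
# Requires: pip install msal requests. All identifiers below are placeholders.
import msal
import requests

AUTHORITY = "https://login.microsoftonline.com/contoso.onmicrosoft.com"  # your B2C tenant (placeholder)

app = msal.ConfidentialClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",
    client_credential="your-client-secret",
    authority=AUTHORITY,
)

# App-only access token for Microsoft Graph.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

users = requests.get(
    "https://graph.microsoft.com/v1.0/users?$select=id,displayName,identities",
    headers={"Authorization": f"Bearer {result['access_token']}"},
).json()

for user in users.get("value", []):
    print(user["id"], user.get("displayName"))
```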
[Make API calls using the Microsoft Graph SDKs](/graph/sdks/create-requests) includes information on how to read and write information from Microsoft Graph, use `$select` to control the properties returned, provide custom query parameters, and use the `$filter` and `$orderBy` query parameters.
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
Previously updated : 01/11/2024 Last updated : 04/16/2024
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
## Self-asserted page (selfasserted)
-**2.1.29**
--- Add CAPTCHA -
+**2.1.30**
+- Removed Change Email for read-only scenarios (for example, Change Phone Number). You can no longer change the email address while changing your phone number; it's now read-only.
+- Implementation of Captcha Control
+
**2.1.26**- - Replaced `Keypress` to `Key Down` event and avoid `Asterisk` for nonrequired in classic mode. **2.1.25**- - Fixed content security policy (CSP) violation and remove additional request header X-Aspnetmvc-Version. **2.1.24**- - Fixed accessibility bugs.- - Fixed MFA related issue and IE11 compatibility issues. **2.1.23**- - Fixed accessibility bugs.- - Reduced `min-width` value for UI viewport for default template. **2.1.22**- - Fixed accessibility bugs.- - Added logic to adopt QR Code Image generated from backend library. **2.1.21**- - More sanitization of script tags to avoid XSS attacks. This revision breaks any script tags in the `<body>`. You should add script tags to the `<head>` tag. For more information, see [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md?pivots=b2c-user-flow). **2.1.20**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Make checkbox as group - Enforce Validation Error Update on control change and enable continue on email verified - Add more field to error code to validation failure response
-
**2.1.16** - Fixed "Claims for verification control haven't been verified" bug while verifying code.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Fixed WCAG 2.1 accessibility bug for the TOTP multifactor authentication screens. **2.1.10**- - Correcting to the tab index - Fixed WCAG 2.1 accessibility and screen reader issues **2.1.9**- - TOTP multifactor authentication support. Adding links that allows users to download and install the Microsoft authenticator app to complete the enrollment of the TOTP on the authenticator. **2.1.8**- - The claim name is added to the `class` attribute of the `<li>` HTML element that surrounding the user's attribute input elements. The class name allows you to create a CSS selector to select the parent `<li>` for a certain user attribute input element. The following HTML markup shows the class attribute for the sign-up page: ```html
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Fixed the localization encoding issue for languages such as Spanish and French. **2.1.1**- - Added a UXString `heading` in addition to `intro` to display on the page as a title. This message is hidden by default. - Added support for saving passwords to iCloud Keychain. - Added support for using policy or the QueryString parameter `pageFlavor` to select the layout (classic, oceanBlue, or slateGray).
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Focus is now placed on the 'change' button after the email verification code is verified. **2.1.0**- - Localization and accessibility fixes. **2.0.0**- - Added support for [display controls](display-controls.md) in custom policies. **1.2.0**- - The username/email and password fields now use the `form` HTML element to allow Microsoft Edge and Internet Explorer (IE) to properly save this information. - Added a configurable user input validation delay for improved user experience. - Accessibility fixes
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Added support for company branding in user flow pages. **1.1.0**- - Removed cancel alert - CSS class for error elements - Show/hide error logic improved - Default CSS removed **1.0.0**- - Initial release ## Unified sign-in and sign-up page with password reset link (unifiedssp)
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
> [!TIP] > If you localize your page to support multiple locales, or languages in a user flow. The [localization IDs](localization-string-ids.md) article provides the list of localization IDs that you can use for the page version you select.
+**2.1.18**
+- Implementation of Captcha Control
+
**2.1.17**--- Add CAPTCHA.
+- Include Aria-required for UnifiedSSP (Accessibility).
**2.1.14**- - Replaced `Keypress` to `Key Down` event. **2.1.13**- - Fixed content security policy (CSP) violation and remove more request header X-Aspnetmvc-Version **2.1.12**- - Removed `ReplaceAll` function for IE11 compatibility. **2.1.11**- - Fixed accessibility bugs. **2.1.10**- - Added additional sanitization of script tags to avoid XSS attacks. This revision breaks any script tags in the `<body>`. You should add script tags to the `<head>` tag. For more information, see [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md?pivots=b2c-user-flow). **2.1.9**- - Fixed accessibility bugs.- - Accessibility changes related to High Contrast button display and anchor focus improvements **2.1.8** - Add descriptive error message and fixed forgotPassword link! **2.1.7**- - Accessibility fix - correcting to the tab index **2.1.6**- - Accessibility fix - set the focus on the input field for verification. - Updates to the UI elements and CSS classes
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Removed UXStrings that are no longer used. **2.1.0**- - Added support for multiple sign-up links. - Added support for user input validation according to the predicate rules defined in the policy. - When the [sign-in option](sign-in-options.md) is set to Email, the sign-in header presents "Sign in with your sign-in name". The username field presents "Sign in name". For more information, see [localization](localization-string-ids.md#sign-up-or-sign-in-page-elements). **1.2.0**- - The username/email and password fields now use the `form` HTML element to allow Microsoft Edge and Internet Explorer (IE) to properly save this information. - Accessibility fixes - You can now add the `data-preload="true"` attribute [in your HTML tags](customize-ui-with-html.md#guidelines-for-using-custom-page-content) to control the load order for CSS and JavaScript.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Added support for tenant branding in user flow pages. **1.1.0**- - Added keep me signed in (KMSI) control **1.0.0**- - Initial release ## MFA page (multifactor)
-**1.2.15**
--- Add CAPTCHA to MFA page.
+**1.2.16**
+- Fixes enter key for 'Phone only' mode.
+- Implementation of Captcha Control
**1.2.12**- - Replaced `KeyPress` to `KeyDown` event. **1.2.11**- - Removed `ReplaceAll` function for IE11 compatibility. **1.2.10**- - Fixed accessibility bugs. **1.2.9**--- Fix `Enter` event trigger on MFA.-
+- Fixes `Enter` event trigger on MFA.
- CSS changes render page text/control in vertical manner for small screens--- Fix Multifactor tab navigation bug.
+- Fixes Multifactor tab navigation bug.
**1.2.8**- - Passed the response status for MFA verification with error for backend to further triage. **1.2.7**- - Fixed accessibility issue on label for retries code.- - Fixed issue caused by incompatibility of default parameter on IE 11.- - Set up `H1` heading and enable by default.- - Updated HandlebarJS version to 4.7.7. **1.2.6**- - Corrected the `autocomplete` value on verification code field from false to off.- - Fixed a few XSS encoding issues. **1.2.5**
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Added support for using policy or the QueryString parameter `pageFlavor` to select the layout (classic, oceanBlue, or slateGray). **1.2.1**- - Accessibility fixes on default templates **1.2.0**- - Accessibility fixes - You can now add the `data-preload="true"` attribute [in your HTML tags](customize-ui-with-html.md#guidelines-for-using-custom-page-content) to control the load order for CSS and JavaScript. - Load linked CSS files at the same time as your HTML template so it doesn't 'flicker' between loading the files.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Added support for tenant branding in user flow pages. **1.1.0**- - 'Confirm Code' button removed - The input field for the code now only takes input up to six (6) characters - The page will automatically attempt to verify the code entered when a six-digit code is entered, without any button having to be clicked
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Default CSS removed **1.0.0**- - Initial release ## Exception Page (globalexception) **1.2.5**--- Removed `ReplaceAl`l function for IE11 compatibility.
+- Removed `ReplaceAll` function for IE11 compatibility.
**1.2.4**- - Fixed accessibility bugs. **1.2.3**- - Updated HandlebarJS version to 4.7.7. **1.2.2**- - Set up `H1` heading and enable by default. **1.2.1**- - Updated jQuery version to 3.5.1. - Updated HandlebarJS version to 4.7.6. **1.2.0**- - Accessibility fixes - You can now add the `data-preload="true"` attribute [in your HTML tags](customize-ui-with-html.md#guidelines-for-using-custom-page-content) to control the load order for CSS and JavaScript. - Load linked CSS files at the same time as your HTML template so it doesn't 'flicker' between loading the files.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Support for Chrome translates **1.1.0**- - Accessibility fix - Removed the default message when there's no contact from the policy - Default CSS removed **1.0.0**- - Initial release ## Other pages (ProviderSelection, ClaimsConsent, UnifiedSSD) **1.2.4**- - Remove `ReplaceAll` function for IE11 compatibility. **1.2.3**- - Fixed accessibility bugs. **1.2.2**- - Updated HandlebarJS version to 4.7.7 **1.2.1**- - Updated jQuery version to 3.5.1. - Updated HandlebarJS version to 4.7.6. **1.2.0**- - Accessibility fixes - You can now add the `data-preload="true"` attribute [in your HTML tags](customize-ui-with-html.md#guidelines-for-using-custom-page-content) to control the load order for CSS and JavaScript. - Load linked CSS files at the same time as your HTML template so it doesn't 'flicker' between loading the files.
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Support for Chrome translates **1.0.0**- - Initial release ## Next steps
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
Microsoft partners with the following ISVs for identity verification and proofin
| ISV partner | Description and integration walkthroughs | |:-|:--| | ![Screenshot of a deduce logo.](./medi) is an identity verification and proofing provider focused on stopping account takeover and registration fraud. It helps combat identity fraud and creates a trusted user experience. |
-| ![Screenshot of a eid-me logo](./medi) is an identity verification and decentralized digital identity solution for Canadian citizens. It enables organizations to meet Identity Assurance Level (IAL) 2 and Know Your Customer (KYC) requirements. |
+| ![Screenshot of an eid-me logo](./medi) is an identity verification and decentralized digital identity solution for Canadian citizens. It enables organizations to meet Identity Assurance Level (IAL) 2 and Know Your Customer (KYC) requirements. |
| ![Screenshot of an Experian logo.](./medi) is an identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. | | ![Screenshot of an IDology logo.](./medi) is an identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others.| | ![Screenshot of a Jumio logo.](./medi) is an ID verification service, which enables real-time automated ID verification, safeguarding customer data. | | ![Screenshot of a LexisNexis logo.](./medi) is a profiling and identity validation provider that verifies user identification and provides comprehensive risk assessment based on userΓÇÖs device. |
-| ![Screenshot of a Onfido logo](./medi) is a document ID and facial biometrics verification solution that allows companies to meet *Know Your Customer* and identity requirements in real time. |
+| ![Screenshot of an Onfido logo](./medi) is a document ID and facial biometrics verification solution that allows companies to meet *Know Your Customer* and identity requirements in real time. |
## MFA and Passwordless authentication
Microsoft partners with the following ISVs for MFA and Passwordless authenticati
| ISV partner | Description and integration walkthroughs | |:-|:--|
-| ![Screenshot of a asignio logo](./medi) is a passwordless, soft biometric, and MFA solution. Asignio uses a combination of the patented Asignio Signature and live facial verification for user authentication. The changeable biometric signature eliminates passwords, fraud, phishing, and credential reuse through omni-channel authentication. |
+| ![Screenshot of an asignio logo](./medi) is a passwordless, soft biometric, and MFA solution. Asignio uses a combination of the patented Asignio Signature and live facial verification for user authentication. The changeable biometric signature eliminates passwords, fraud, phishing, and credential reuse through omni-channel authentication. |
| ![Screenshot of a bloksec logo](./medi) is a passwordless authentication and tokenless MFA solution, which provides real-time consent-based services and protects customers against identity-centric cyber-attacks such as password stuffing, phishing, and man-in-the-middle attacks. | | ![Screenshot of a grit biometric authentication logo.](./medi) provides users the option to sign in using finger print, face ID or [Windows Hello](https://support.microsoft.com/windows/learn-about-windows-hello-and-set-it-up-dae28983-8242-bb2a-d3d1-87c9d265a5f0) for enhanced security. | ![Screenshot of a haventec logo](./medi) is a passwordless authentication provider, which provides decentralized identity platform that eliminates passwords, shared secrets, and friction. | | ![Screenshot of a hypr logo](./medi) is a passwordless authentication provider, which replaces passwords with public key encryptions eliminating fraud, phishing, and credential reuse. |
-| ![Screenshot of a idemia logo](./medi) is a passwordless authentication provider, which provides real-time consent-based services with biometric authentication like faceID and fingerprinting eliminating fraud and credential reuse. |
-| ![Screenshot of a itsme logo](./medi) is an Electronic Identification, Authentication and Trust Services (eiDAS) compliant digital ID solution to allow users to sign in securely without card readers, passwords, two-factor authentication, and multiple PIN codes. |
+| ![Screenshot of an idemia logo](./medi) is a passwordless authentication provider, which provides real-time consent-based services with biometric authentication like faceID and fingerprinting eliminating fraud and credential reuse. |
+| ![Screenshot of an itsme logo](./medi) is an Electronic Identification, Authentication and Trust Services (eiDAS) compliant digital ID solution to allow users to sign in securely without card readers, passwords, two-factor authentication, and multiple PIN codes. |
|![Screenshot of a Keyless logo.](./medi) is a passwordless authentication provider that provides authentication in the form of a facial biometric scan and eliminates fraud, phishing, and credential reuse. | ![Screenshot of a nevis logo](./medi) enables passwordless authentication and provides a mobile-first, fully branded end-user experience with Nevis Access app for strong customer authentication and to comply with PSD2 transaction requirements. | | ![Screenshot of a nok nok logo](./medi) provides passwordless authentication and enables FIDO certified multifactor authentication such as FIDO UAF, FIDO U2F, WebAuthn, and FIDO2 for mobile and web applications. Using Nok Nok customers can improve their security posture while balancing user experience.
Microsoft partners with the following ISVs for MFA and Passwordless authenticati
| ![Screenshot of a twilio logo.](./medi) provides multiple solutions to enable MFA through SMS one-time password (OTP), time-based one-time password (TOTP), and push notifications, and to comply with SCA requirements for PSD2. | | ![Screenshot of a typingDNA logo](./medi) enables strong customer authentication by analyzing a user's typing pattern. It helps companies enable a silent MFA and comply with SCA requirements for PSD2. | | ![Screenshot of a whoiam logo](./medi) is a Branded Identity Management System (BRIMS) application that enables organizations to verify their user base by voice, SMS, and email. |
-| ![Screenshot of a xid logo](./medi) is a digital ID solution that provides users with passwordless, secure, multifactor authentication. xID-authenticated users obtain their identities verified by a My Number Card, the digital ID card issued by the Japanese government. Organizations can get users verified personal information through the xID API. |
+| ![Screenshot of an xid logo](./medi) is a digital ID solution that provides users with passwordless, secure, multifactor authentication. xID-authenticated users obtain their identities verified by a My Number Card, the digital ID card issued by the Japanese government. Organizations can get users verified personal information through the xID API. |
## Role-based access control
Microsoft partners with the following ISVs for fraud detection and prevention.
| ISV partner | Description and integration walkthroughs | |:-|:--|
-| ![Screenshot of a Arkose lab logo](./medi) is a fraud prevention solution provider that helps organizations protect against bot attacks, account takeover attacks, and fraudulent account openings. |
+| ![Screenshot of an Arkose lab logo](./medi) is a fraud prevention solution provider that helps organizations protect against bot attacks, account takeover attacks, and fraudulent account openings. |
| ![Screenshot of a BioCatch logo](./medi) is a fraud prevention solution provider that analyzes a user's physical and cognitive digital behaviors to generate insights that distinguish between legitimate customers and cyber-criminals. | | ![Screenshot of a Microsoft Dynamics 365 logo](./medi) is a solution that helps organizations protect against fraudulent account openings through device fingerprinting. |
active-directory-b2c Partner Jumio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-jumio.md
The Jumio integration includes the following components:
The following architecture diagram shows the implementation.
- ![Diagram of the architecture of a Azure AD B2C integration with Jumio](./media/partner-jumio/jumio-architecture-diagram.png)
+ ![Diagram of the architecture of an Azure AD B2C integration with Jumio](./media/partner-jumio/jumio-architecture-diagram.png)
1. The user signs in, or signs up, and creates an account. Azure AD B2C collects user attributes. 2. Azure AD B2C calls the middle-layer API and passes the user attributes.
active-directory-b2c Partner Saviynt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-saviynt.md
Learn to integrate Azure Active Directory B2C (Azure AD B2C) with the Saviynt Security Manager platform, which has visibility, security, and governance. Saviynt incorporates application risk and governance, infrastructure management, privileged account management, and customer risk analysis.
-Learn more: [Saviynt for Azure AD B2C](https://saviynt.com/integrations/old-version-azure-ad/for-b2c/)
+Learn more: [Saviynt for Azure AD B2C](https://saviynt.com/fr/integrations/entra-id/for-b2c)
Use the following instructions to set up access control delegated administration for Azure AD B2C users. Saviynt determines if a user is authorized to manage Azure AD B2C users with:
The Saviynt integration includes the following components:
* **Azure AD B2C** – identity as a service for custom control of customer sign-up, sign-in, and profile management * See, [Azure AD B2C, Get started](https://azure.microsoft.com/services/active-directory/external-identities/b2c/) * **Saviynt for Azure AD B2C** – identity governance for delegated administration of user life-cycle management and access governance
- * See, [Saviynt for Azure AD B2C](https://saviynt.com/integrations/old-version-azure-ad/for-b2c/)
+ * See, [Saviynt for Azure AD B2C](https://saviynt.com/fr/integrations/entra-id/for-b2c)
* **Microsoft Graph API** ΓÇô interface for Saviynt to manage Azure AD B2C users and their access * See, [Use the Microsoft Graph API](/graph/use-the-api)
active-directory-b2c Partner Transmit Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-transmit-security.md
+
+ Title: Tutorial to configure Azure Active Directory B2C with Transmit Security
+
+description: Learn how to configure Azure Active Directory B2C with Transmit Security for risk detection.
++++ Last updated : 05/13/2024+++
+zone_pivot_groups: b2c-policy-type
+
+# Customer intent: As a developer integrating Transmit Security with Azure AD B2C for risk detection, I want to configure a custom policy with Transmit Security and set it up in Azure AD B2C, so I can detect and remediate risks by using multi-factor authentication.
+++
+# Configure Transmit Security with Azure Active Directory B2C for risk detection and prevention
+
+In this tutorial, learn to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Transmit Security's Detection and Response Services (DRS)](https://transmitsecurity.com/platform/detection-and-response). Transmit Security allows you to detect risk in customer interactions on digital channels, and to enable informed identity and trust decisions across the consumer experience.
+++++
+## Scenario description
+
+A Transmit Detection and Response integration includes the following components:
+
+- **Azure AD B2C tenant**: Authenticates the user and hosts a script that collects device information as users execute a target policy. It blocks or challenges sign-in/up attempts based on the risk recommendation returned by Transmit.
+- **Custom UI templates**: Customizes HTML content of the pages rendered by Azure AD B2C. These pages include the JavaScript snippets required for Transmit risk detection.
+- **Transmit data collection service**: Dynamically embedded script that logs device information, which is used to continuously assess risk during user interactions.
+- **Transmit DRS API endpoint**: Provides the risk recommendation based on collected data. Azure AD B2C communicates with this endpoint using a REST API connector.
+- **Azure Functions**: Your hosted API endpoint that is used to obtain a recommendation from the Transmit DRS API endpoint via the API connector.
+
+The following architecture diagram illustrates the implementation described in the guide:
+
+[ ![Diagram of the Transmit and Azure AD B2C architecture.](./media/partner-transmit-security/transmit-security-integration-diagram.png) ](./media/partner-transmit-security/transmit-security-integration-diagram.png#lightbox)
+
+1. The user signs in with Azure AD B2C.
+2. A custom page initializes the Transmit SDK, which starts streaming device information to Transmit.
+3. Azure AD B2C reports a sign-in action event to Transmit in order to obtain an action token.
+4. Transmit returns an action token, and Azure AD B2C proceeds with the user sign-up or sign-in.
+5. After the user signs in, Azure AD B2C requests a risk recommendation from Transmit via the Azure Function.
+6. The Azure Function sends Transmit the recommendation request with the action token.
+7. Transmit returns a recommendation (challenge/allow/deny) based on the collected device information.
+8. The Azure Function passes the recommendation result to Azure AD B2C to handle accordingly.
+9. Azure AD B2C performs more steps if needed, like multifactor authentication, and completes the sign-up or sign-in flow.
+
+## Prerequisites
+
+* A Microsoft Entra subscription. If you don't have one, get a [free account](https://azure.microsoft.com/free/)
+* [An Azure AD B2C tenant](./tutorial-create-tenant.md) linked to the Entra subscription
+* [A registered web application](./tutorial-register-applications.md) in your Azure AD B2C tenant
+* [Azure AD B2C custom policies](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+* A Transmit Security tenant. Go to [transmitsecurity.com](https://transmitsecurity.com/)
+
+## Step 1: Create a Transmit app
+
+Sign in to the [Transmit Admin Portal](https://portal.transmitsecurity.io/) and [create an application](https://developer.transmitsecurity.com/guides/user/create_new_application/):
+
+1. From **Applications**, select **Add application**.
+1. Configure the application with the following attributes:
+
+ | Property | Description |
+ |:|:|
+ | **Application name** | Application name|
+ | **Client name** | Client name|
+ | **Redirect URIs** | Enter your website URL. This attribute is a required field but not used for this flow|
+
+3. Select **Add**.
+
+4. Upon registration, a **Client ID** and **Client Secret** appear. Record the values for use later.
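+
+    If you plan to supply these values to the Azure Function you create in Step 3, one simple approach (a sketch; the setting names are illustrative) is to store them as application settings and read them at runtime:
+
+    ```csharp
+    // Setting names are illustrative; they must match the keys you add to local.settings.json
+    // (locally) or to the Function App's application settings (in Azure).
+    string transmitClientId = Environment.GetEnvironmentVariable("TransmitClientId");
+    string transmitClientSecret = Environment.GetEnvironmentVariable("TransmitClientSecret");
+    ```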
+
+## Step 2: Create your custom UI
+
+Start by integrating Transmit DRS into the B2C frontend application. Create a custom sign-in page that integrates the [Transmit SDK](https://developer.transmitsecurity.com/sdk-ref/platform/introduction/), and replaces the default Azure AD B2C sign-in page.
+
+Once activated, Transmit DRS starts collecting information for the user interacting with your app. Transmit DRS returns an action token that Azure AD B2C needs for risk recommendation.
+
+To integrate Transmit DRS into the B2C sign-in page, follow these steps:
+
+1. Prepare a custom HTML file for your sign-in page based on the [sample templates](./customize-ui-with-html.md#sample-templates). Add the following script to load and initialize the Transmit SDK, and to obtain an action token. The returned action token should be stored in a hidden HTML element (`ts-drs-response` in this example).
+
+ ```html
+ <!-- Function that obtains an action token -->
+ <script>
+ function fill_token() {
+ window.tsPlatform.drs.triggerActionEvent("login").then((actionResponse) => {
+ let actionToken = actionResponse.actionToken;
+ document.getElementById("ts-drs-response").value = actionToken;
+ console.log(actionToken);
+ });
+ }
+ </script>
+
+ <!-- Loads DRS SDK -->
+ <script src="https://platform-websdk.transmitsecurity.io/platform-websdk/latest/ts-platform-websdk.js" defer> </script>
+
+ <!-- Upon page load, initializes DRS SDK and calls the fill_token function -->
+ <script defer>
+ window.onload = function() {
+ if (window.tsPlatform) {
+ // Client ID found in the app settings in Transmit Admin portal
+ window.tsPlatform.initialize({ clientId: "[clientId]" });
+ console.log("Transmit Security platform initialized");
+ fill_token();
+            } else {
+ console.error("Transmit Security platform failed to load");
+ }
+ };
+ </script>
+ ```
+
+1. [Enable JavaScript and page layout versions in Azure AD B2C](./javascript-and-page-layout.md).
+
+1. Host the HTML page on a Cross-Origin Resource Sharing (CORS) enabled web endpoint by [creating a storage account](../storage/blobs/storage-blobs-introduction.md) and [adding CORS support for Azure Storage](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services).
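+
+    If you prefer to script this setup, the CORS rule can also be applied with the Azure Storage SDK. The following is a minimal sketch (not part of the original walkthrough) that uses the `Azure.Storage.Blobs` package; the connection string and allowed origin are placeholders you need to replace:
+
+    ```csharp
+    using Azure.Storage.Blobs;
+    using Azure.Storage.Blobs.Models;
+
+    // Connect to the storage account that hosts the custom HTML page (placeholder connection string).
+    var blobService = new BlobServiceClient("<storage-account-connection-string>");
+
+    // Read the current service properties so existing settings are preserved.
+    BlobServiceProperties properties = blobService.GetProperties().Value;
+
+    // Allow the Azure AD B2C pages to fetch the hosted HTML across origins.
+    properties.Cors.Add(new BlobCorsRule
+    {
+        AllowedOrigins = "https://<your-tenant-name>.b2clogin.com",
+        AllowedMethods = "GET,OPTIONS",
+        AllowedHeaders = "*",
+        ExposedHeaders = "*",
+        MaxAgeInSeconds = 200
+    });
+
+    blobService.SetProperties(properties);
+    ```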
+
+## Step 3: Create an Azure Function
+
+Azure AD B2C can obtain a risk recommendation from Transmit using an [API connector](./add-api-connector.md). Passing this request through an intermediate web API (such as using [Azure Functions](/azure/azure-functions/)) provides more flexibility in your implementation logic.
+
+Follow these steps to create an Azure function that uses the action token from the frontend application to get a recommendation from the [Transmit DRS endpoint](https://developer.transmitsecurity.com/openapi/risk/recommendations/#operation/getRiskRecommendation).
+
+1. Create the entry point of your Azure Function, an HTTP-triggered function that processes incoming HTTP requests.
+
+ ```csharp
+ public static async Task<HttpResponseMessage> Run(HttpRequest req, ILogger log)
+ {
+ // Function code goes here
+ }
+ ```
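+
+    The fragments in the following steps assume the usual namespaces for an HTTP-triggered function are imported. A minimal sketch of the `using` directives those fragments rely on (adjust to your project):
+
+    ```csharp
+    using System;
+    using System.IO;
+    using System.Net;
+    using System.Net.Http;
+    using System.Text;
+    using System.Threading.Tasks;
+    using Microsoft.AspNetCore.Http;
+    using Microsoft.Extensions.Logging;
+    using Newtonsoft.Json;
+    ```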
+
+2. Extract the action token from the request. Your custom policy defines how the request is passed: in query string parameters or in the request body.
+
+ ```csharp
+ // Checks for the action token in the query string
+ string actionToken = req.Query["actiontoken"];
+
+ // Checks for the action token in the request body
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ actionToken = actionToken ?? data?.actiontoken;
+ ```
+
+3. Validate the action token by checking that the provided value isn't empty or null:
+
+ ```csharp
+ // Returns an error response if the action token is missing
+ if (string.IsNullOrEmpty(actionToken))
+ {
+ var respContent = new { version = "1.0.0", status = (int)HttpStatusCode.BadRequest, userMessage = "Invalid or missing action token" };
+ var json = JsonConvert.SerializeObject(respContent);
+ log.LogInformation(json);
+ return new HttpResponseMessage(HttpStatusCode.BadRequest)
+ {
+ Content = new StringContent(json, Encoding.UTF8, "application/json")
+ };
+ }
+ ```
+
+4. Call the Transmit DRS API. The Transmit Client ID and Client Secret obtained in Step 1 should be used to generate bearer tokens for API authorization. Make sure to add the necessary environment variables (like ClientId and ClientSecret) in your `local.settings.json` file.
+
+ ```csharp
+ HttpClient client = new HttpClient();
+ client.DefaultRequestHeaders.Add("Authorization", $"Bearer {transmitSecurityApiKey}");
+
+ // Add code here that sends this GET request:
+ // https://api.transmitsecurity.io/risk/v1/recommendation?action_token=[YOUR_ACTION_TOKEN]
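+    // Sketch (not part of the original sample): one way to build the request URL shown above.
+    // 'transmitSecurityApiKey' above is assumed to hold a bearer token obtained from Transmit
+    // with the Client ID and Client Secret recorded in Step 1.
+    string urlWithActionToken = $"https://api.transmitsecurity.io/risk/v1/recommendation?action_token={Uri.EscapeDataString(actionToken)}";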
+
+ HttpResponseMessage response = await client.GetAsync(urlWithActionToken);
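+
+    // Read the response body; the snippet in the next step uses this value.
+    string responseContent = await response.Content.ReadAsStringAsync();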
+ ```
+
+5. Process the API response. The following code forwards the API response if successful; otherwise, handles any errors.
+
+ ```csharp
+ if (response.IsSuccessStatusCode)
+ {
+ log.LogInformation(responseContent);
+ return new HttpResponseMessage(HttpStatusCode.OK)
+ {
+ Content = new StringContent(responseContent, Encoding.UTF8, "application/json")
+ };
+ }
+ else
+ {
+ var errorContent = new { version = "1.0.0", status = (int)response.StatusCode, userMessage = "Error calling Transmit Security API" };
+ var json = JsonConvert.SerializeObject(errorContent);
+ log.LogError(json);
+ return new HttpResponseMessage(response.StatusCode)
+ {
+ Content = new StringContent(json, Encoding.UTF8, "application/json")
+ };
+ }
+ ```
+
+## Step 4: Configure your custom policies
+
+You incorporate Transmit DRS into your Azure AD B2C application by extending your custom policies.
+
+1. Download the [custom policy starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) to get started (see [Create custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy))
+
+2. Create a new file that inherits from **TrustFrameworkExtensions**, which extends the base policy with tenant-specific customizations for Transmit DRS.
+
+ ```xml
+ <BasePolicy>
+ <TenantId>YOUR AZURE TENANT</TenantId>
+ <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
+ </BasePolicy>
+ ```
+
+3. In the `BuildingBlocks` section, define `actiontoken`, `ts-drs-response`, and `ts-drs-recommendation` as claims:
+
+ ```xml
+ <BuildingBlocks>
+ <ClaimsSchema>
+ <ClaimType Id="ts-drs-response">
+ <DisplayName>ts-drs-response</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText>Parameter provided to the DRS service for the response</UserHelpText>
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+ <ClaimType Id="actiontoken">
+ <DisplayName>actiontoken</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText />
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+ <ClaimType Id="ts-drs-recommendation">
+ <DisplayName>recommendation</DisplayName>
+ <DataType>string</DataType>
+ <UserHelpText />
+ <UserInputType>TextBox</UserInputType>
+ </ClaimType>
+ </ClaimsSchema>
+    </BuildingBlocks>
+ ```
+
+4. In the `BuildingBlocks` section, add a reference to your custom UI:
+
+ ```xml
+ <BuildingBlocks>
+ <ClaimsSchema>
+ <!-- your claim schemas-->
+ </ClaimsSchema>
+
+ <ContentDefinitions>
+ <ContentDefinition Id="api.selfasserted">
+ <!-- URL of your hosted custom HTML file-->
+ <LoadUri>YOUR_SIGNIN_PAGE_URL</LoadUri>
+ </ContentDefinition>
+ </ContentDefinitions>
+ </BuildingBlocks>
+ ```
+
+5. In the `ClaimsProviders` section, configure a claims provider that includes the following technical profiles: one (`SelfAsserted-LocalAccountSignin-Email`) that outputs the action token, and another (`login-DRSCheck` in our example) for the Azure function that receives the action token as input and outputs the risk recommendation.
+
+ ```xml
+ <ClaimsProviders>
+ <ClaimsProvider>
+ <DisplayName>Sign in using DRS</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
+ <DisplayName>Local Account Sign-in</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <Item Key="SignUpTarget">SignUpWithLogonEmailExchange</Item>
+ <Item Key="setting.operatingMode">Email</Item>
+ <Item Key="setting.showSignupLink">true</Item>
+ <Item Key="setting.showCancelButton">false</Item>
+ <Item Key="ContentDefinitionReferenceId">api.selfasserted</Item>
+ <Item Key="language.button_continue">Sign In</Item>
+ </Metadata>
+ <IncludeInSso>false</IncludeInSso>
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="signInName" />
+ </InputClaims>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="signInName" Required="true" />
+ <OutputClaim ClaimTypeReferenceId="password" Required="true" />
+ <OutputClaim ClaimTypeReferenceId="objectId" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" />
+ <!-- Outputs the action token value provided by the frontend-->
+ <OutputClaim ClaimTypeReferenceId="ts-drs-response" />
+ </OutputClaims>
+ <ValidationTechnicalProfiles>
+ <ValidationTechnicalProfile ReferenceId="login-DRSCheck" />
+ <ValidationTechnicalProfile ReferenceId="login-NonInteractive" />
+ </ValidationTechnicalProfiles>
+ </TechnicalProfile>
+ <TechnicalProfile Id="login-DRSCheck">
+ <DisplayName>DRS check to validate the interaction and device </DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <Metadata>
+ <!-- Azure Function App -->
+ <Item Key="ServiceUrl">YOUR_FUNCTION_URL</Item>
+ <Item Key="AuthenticationType">None</Item>
+ <Item Key="SendClaimsIn">Body</Item>
+ <!-- JSON, Form, Header, and Query String formats supported -->
+ <Item Key="ClaimsFormat">Body</Item>
+ <!-- Defines format to expect claims returning to B2C -->
+ <!-- REMOVE the following line in production environments -->
+ <Item Key="AllowInsecureAuthInProduction">true</Item>
+ </Metadata>
+ <InputClaims>
+ <!-- Receives the action token value as input -->
+ <InputClaim ClaimTypeReferenceId="ts-drs-response" PartnerClaimType="actiontoken" DefaultValue="0" />
+ </InputClaims>
+ <OutputClaims>
+ <!-- Outputs the risk recommendation value returned by Transmit (via the Azure function) -->
+ <OutputClaim ClaimTypeReferenceId="ts-drs-recommendation" PartnerClaimType="recommendation.type" />
+ </OutputClaims>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ </ClaimsProviders>
+ ```
+
+6. In the `UserJourneys` section, create a new user journey (`SignInDRS` in our example) that identifies the user and performs the appropriate identity protection steps based on the Transmit risk recommendation. For example, the journey can proceed normally if Transmit returns **allow** or **trust**, terminate and inform the user of the issue if **deny**, or trigger a step-up authentication process if **challenge**.
+
+```xml
+ <UserJourneys>
+ <UserJourney Id="SignInDRS">
+ <OrchestrationSteps>
+ <!-- Step that identifies the user by email and stores the action token -->
+ <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.selfasserted">
+ <ClaimsProviderSelections>
+ <ClaimsProviderSelection ValidationClaimsExchangeId="LocalAccountSigninEmailExchange" />
+ </ClaimsProviderSelections>
+ <ClaimsExchanges>
+ <ClaimsExchange Id="LocalAccountSigninEmailExchange" TechnicalProfileReferenceId="SelfAsserted-LocalAccountSignin-Email" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <!-- Step to perform DRS check -->
+ <OrchestrationStep Order="2" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="DRSCheckExchange" TechnicalProfileReferenceId="login-DRSCheck" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <!-- Conditional step for ACCEPT or TRUST -->
+ <OrchestrationStep Order="3" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="false">
+ <Value>ts-drs-recommendation</Value>
+ <Value>ACCEPT</Value>
+ <Value>TRUST</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <!-- Define the ClaimsExchange or other actions for ACCEPT or TRUST -->
+ </OrchestrationStep>
+
+ <!-- Conditional step for CHALLENGE -->
+ <OrchestrationStep Order="4" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="false">
+ <Value>ts-drs-recommendation</Value>
+ <Value>CHALLENGE</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <!-- Define the ClaimsExchange or other actions for CHALLENGE -->
+ </OrchestrationStep>
+
+ <!-- Conditional step for DENY -->
+ <OrchestrationStep Order="5" Type="ClaimsExchange">
+ <Preconditions>
+ <Precondition Type="ClaimEquals" ExecuteActionsIf="false">
+ <Value>ts-drs-recommendation</Value>
+ <Value>DENY</Value>
+ <Action>SkipThisOrchestrationStep</Action>
+ </Precondition>
+ </Preconditions>
+ <!-- Define the ClaimsExchange or other actions for DENY -->
+ </OrchestrationStep>
+ </UserJourney>
+ </UserJourneys>
+```
+
+7. Save the policy file as `DRSTrustFrameworkExtensions.xml`.
+
+8. Create a new file that inherits from the file you saved. It extends the sign-in policy that works as an entry point for the sign-up and sign-in user journeys with Transmit DRS.
+
+ ```xml
+ <BasePolicy>
+ <TenantId>YOUR AZURE TENANT</TenantId>
+ <PolicyId>B2C_1A_DRSTrustFrameworkExtensions</PolicyId>
+ </BasePolicy>
+ ```
+
+9. In the `RelyingParty` section, configure your DRS-enhanced user journey (`SignInDRS` in our example).
+
+ ```xml
+ <RelyingParty>
+ <DefaultUserJourney ReferenceId="SignInDRS" />
+ <UserJourneyBehaviors>
+ <ScriptExecution>Allow</ScriptExecution>
+ </UserJourneyBehaviors>
+ <TechnicalProfile Id="PolicyProfile">
+ <DisplayName>PolicyProfile</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" />
+ <OutputClaim ClaimTypeReferenceId="givenName" />
+ <OutputClaim ClaimTypeReferenceId="surname" />
+ <OutputClaim ClaimTypeReferenceId="email" />
+ <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
+ </OutputClaims>
+ <SubjectNamingInfo ClaimType="sub" />
+ </TechnicalProfile>
+ </RelyingParty>
+ ```
+
+10. Save the policy file as `DRSSignIn.xml`.
+
+## Step 5: Upload the custom policy
+
+Using the directory with your Azure AD B2C tenant, upload the custom policy:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. In the portal toolbar, select **Directories + subscriptions**.
+1. On the **Portal settings | Directories + subscriptions** page, in the **Directory name** list, find the Azure AD B2C directory and then select **Switch**.
+1. Under **Policies**, select **Identity Experience Framework**.
+1. Select **Upload Custom Policy**, and then upload the updated custom policy files.
+
+## Step 6: Test your custom policy
+
+Using the directory with your Azure AD B2C tenant, test your custom policy:
+
+1. In the Azure AD B2C tenant, under **Policies**, select **Identity Experience Framework**.
+2. Under **Custom policies**, select the sign-in policy.
+3. For **Application**, select the web application you registered.
+4. Select **Run now**.
+5. Complete the user flow.
++
+## Next steps
+
+* Ask questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-ad-b2c)
+* Check out the [Azure AD B2C custom policy overview](custom-policy-overview.md)
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
For testing, you register `https://jwt.ms`, a Microsoft web application with dec
Complete [Tutorial: Register a web application in Azure AD B2C](tutorial-register-applications.md?tabs=app-reg-ga)
-## Create a xID policy key
+<a name='create-a-xid-policy-key'></a>
+
+## Create an xID policy key
Store the Client Secret from xID in your Azure AD B2C tenant. For the following instructions, use the directory with the Azure AD B2C tenant.
active-directory-b2c Predicates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/predicates.md
The IsLengthRange method checks whether the length of a string claim value is wi
| Maximum | Yes | The maximum number of characters that can be entered. | | Minimum | Yes | The minimum number of characters that must be entered. |
-The following example shows a IsLengthRange method with the parameters `Minimum` and `Maximum` that specify the length range of the string:
+The following example shows an IsLengthRange method with the parameters `Minimum` and `Maximum` that specify the length range of the string:
```xml <Predicate Id="IsLengthBetween8And64" Method="IsLengthRange" HelpText="The password must be between 8 and 64 characters.">
active-directory-b2c Register Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/register-apps.md
You can register different app types in your Azure AD B2C Tenant. The how-to gui
- [Microsoft Graph application](microsoft-graph-get-started.md) - [SAML application](saml-service-provider.md?tabs=windows&pivots=b2c-custom-policy) - [Publish app in Microsoft Entra app gallery](publish-app-to-azure-ad-app-gallery.md)
-
-
-
-
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
The following example demonstrates the use of a self-asserted technical profile
<UseTechnicalProfileForSessionManagement ReferenceId="SM-AAD" /> </TechnicalProfile> ```-
+> [!NOTE]
+> When you collect the password claim value in the self-asserted technical profile, that value is only available within the same technical profile or within validation technical profiles that are referenced by that same self-asserted technical profile. When execution of that self-asserted technical profile completes and moves to another technical profile, the password's value is lost. Consequently, the password claim can only be stored in the orchestration step in which it is collected.
### Output claims sign-up or sign-in page In a combined sign-up and sign-in page, note the following when using a content definition [DataUri](contentdefinitions.md#datauri) element that specifies a `unifiedssp` or `unifiedssd` page type:
active-directory-b2c Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/service-limits.md
Previously updated : 01/11/2024 Last updated : 05/11/2024 zone_pivot_groups: b2c-policy-type
The following table lists the administrative configuration limits in the Azure A
|String Limit per Attribute |250 Chars | |Number of B2C tenants per subscription |20 | |Total number of objects (user accounts and applications) per tenant (default limit)|1.25 million |
-|Total number of objects (user accounts and applications) per tenant (using a verified custom domain)|5.25 million |
+|Total number of objects (user accounts and applications) per tenant (using a verified custom domain). If you want to increase this limit, please contact [Microsoft Support](find-help-open-support-ticket.md).|5.25 million |
|Levels of [inheritance](custom-policy-overview.md#inheritance-model) in custom policies |10 | |Number of policies per Azure AD B2C tenant (user flows + custom policies) |200 | |Maximum policy file size |1024 KB |
active-directory-b2c String Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/string-transformations.md
Determines whether a claim value is equal to the input parameter value. Check ou
| - | -- | | -- | | InputClaim | inputClaim1 | string | The claim's type, which is to be compared. | | InputParameter | operator | string | Possible values: `EQUAL` or `NOT EQUAL`. |
-| InputParameter | compareTo | string | String comparison, one of the values: Ordinal, OrdinalIgnoreCase. |
+| InputParameter | compareTo | string | String comparison, one of the values, that is, the string to which the input claim value must be compared: Ordinal, OrdinalIgnoreCase. |
| InputParameter | ignoreCase | string | Specifies whether this comparison should ignore the case of the strings being compared. | | OutputClaim | outputClaim | boolean | The claim that is produced after this claims transformation has been invoked. |
active-directory-b2c Tenant Management Read Tenant Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-read-tenant-name.md
To get your Azure AD B2C tenant ID, follow these steps:
## Next steps -- [Clean up resources and delete tenant](tutorial-delete-tenant.md)
+- [Clean up resources and delete tenant](tutorial-delete-tenant.md)
active-directory-b2c Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot.md
Your application needs to handle certain errors coming from Azure B2C service. T
This error occurs when the [self-service password reset experience](add-password-reset-policy.md#self-service-password-reset-recommended) isn't enabled in a user flow. Thus, selecting the **Forgot your password?** link doesn't trigger a password reset user flow. Instead, the error code `AADB2C90118` is returned to your application. There are 2 solutions to this problem:
- - Respond back with a new authentication request using Azure AD B2C password reset user flow.
+- Respond back with a new authentication request using Azure AD B2C password reset user flow.
- Use recommended [self service password reset (SSPR) experience](add-password-reset-policy.md#self-service-password-reset-recommended).
You can also trace the exchange of messages between your client browser and Azur
## Troubleshoot policy validity
-After you finish developing your policy, you upload the policy to Azure AD B2C. There might be some issues with your policy, but you can validity your policy before you upload it.
+After you finish developing your policy, you upload the policy to Azure AD B2C. There might be some issues with your policy, but you can validate your policy before you upload it.
The most common error in setting up custom policies is improperly formatted XML. A good XML editor is nearly essential. It displays XML natively, color-codes content, prefills common terms, keeps XML elements indexed, and can validate against an XML schema.
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Add user attributes and customize user input in Azure Active Directory B2C](configure-user-input.md) - Updated instructional steps - [Set up sign-up and sign-in with a Google account using Azure Active Directory B2C](identity-provider-google.md) - Editorial updates - [Localization string IDs](localization-string-ids.md) - Updated the localization string IDs-
advisor Advisor How To Calculate Total Cost Savings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-how-to-calculate-total-cost-savings.md
Title: Export cost savings in Azure Advisor
+ Title: Calculate cost savings in Azure Advisor
Last updated 02/06/2024 description: Export cost savings in Azure Advisor and calculate the aggregated potential yearly savings by using the cost savings amount for each recommendation.
-# Export cost savings
+# Calculate cost savings
+
+This article provides guidance on how to calculate total cost savings in Azure Advisor.
+
+## Export cost savings for recommendations
To calculate aggregated potential yearly savings, follow these steps:
The Advisor **Overview** page opens.
[![Screenshot of the Azure Advisor cost recommendations page that shows download option.](./media/advisor-how-to-calculate-total-cost-savings.png)](./media/advisor-how-to-calculate-total-cost-savings.png#lightbox) > [!NOTE]
-> Recommendations show savings individually, and may overlap with the savings shown in other recommendations, for example ΓÇô you can only benefit from savings plans for compute or reservations for virtual machines, but not from both.
+> Different types of cost savings recommendations are generated using overlapping datasets (for example, VM rightsizing/shutdown, VM reservations and savings plan recommendations all consider on-demand VM usage). As a result, resource changes (e.g., VM shutdowns) or reservation/savings plan purchases will impact on-demand usage, and the resulting recommendations and associated savings forecast.
+
+## Understand cost savings
+
+Azure Advisor provides recommendations for resizing/shutting down underutilized resources, purchasing compute reserved instances, and savings plans for compute.
+
+These recommendations contain one or more calls-to-action and forecasted savings from following the recommendations. Recommendations should be followed in a specific order: rightsizing/shutdown, followed by reservation purchases, and finally, the savings plan purchase. This sequence allows each step to impact the subsequent ones positively.
+
+For example, rightsizing or shutting down resources reduces on-demand costs immediately. This change in your usage pattern essentially invalidates your existing reservation and savings plan recommendations, as they were based on your pre-rightsizing usage and costs. Updated reservation and savings plan recommendations (and their forecasted savings) should appear within three days.
+The forecasted savings from reservations and savings plans are based on actual rates and usage, while the forecasted savings from rightsizing/shutdown are based on retail rates. The actual savings may vary depending on the usage patterns and rates. Assuming there are no material changes to your usage patterns, your actual savings from reservations and savings plan should be in line with the forecasts. Savings from rightsizing/shutdown vary based on your actual rates. This is important if you intend to track cost savings forecasts from Azure Advisor.
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Learn more about [Volume - Azure NetApp Files AZ Volume SDK version recommendati
The minimum SDK version of 2022-05-01 is recommended for the Azure NetApp Files Cross Zone Replication feature, to enable you to replicate volumes across availability zones within the same region.
-Learn more about [Volume - Azure NetApp Files Cross Zone Replication SDK recommendation (Cross Zone Replication SDK recommendation)](https://aka.ms/anf-sdkversion).
+Learn more about [Volume - Azure NetApp Files Cross Zone Replication SDK recommendation](https://aka.ms/anf-sdkversion).
### Volume Encryption using Customer Managed Keys with Azure Key Vault SDK Recommendation
Learn more about [Capacity Pool - Azure NetApp Files Cool Access SDK version rec
The minimum SDK version of 2022-xx-xx is recommended for automation of large volume creation, resizing and deletion.
-Learn more about [Volume - Large Volumes SDK Recommendation (Large Volumes SDK Recommendation)](/azure/azure-netapp-files/azure-netapp-files-resource-limits).
+Learn more about [Volume - Large Volumes SDK Recommendation](/azure/azure-netapp-files/azure-netapp-files-resource-limits).
### Prevent hitting subscription limit for maximum storage accounts
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
Title: What's new in Azure Advisor description: A description of what's new and changed in Azure Advisor Previously updated : 11/02/2023 Last updated : 05/03/2024 # What's new in Azure Advisor? Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## April 2024
+
+### Azure Advisor will no longer display aggregated potential yearly savings beginning 30 September 2024
+
+In the Azure portal, Azure Advisor currently shows potential aggregated cost savings under the label "Potential yearly savings based on retail pricing" on pages where cost recommendations are displayed (as shown in the image). This aggregated savings estimate will be removed from the Azure portal on 30 September 2024. However, you can still evaluate potential yearly savings tailored to your specific needs by following the steps in [Calculate cost savings](/azure/advisor/advisor-how-to-calculate-total-cost-savings). All individual recommendations and their associated potential savings will remain available.
+
+#### Recommended action
+
+If you want to continue calculating aggregated potential yearly savings, follow [these steps](/azure/advisor/advisor-how-to-calculate-total-cost-savings). Note that individual recommendations might show savings that overlap with the savings shown in other recommendations, although you might not be able to benefit from them concurrently. For example, you can benefit from savings plans or from reservations for virtual machines, but not typically from both on the same virtual machines.
+
+### Public Preview: Resiliency Review on Azure Advisor
+
+Recommendations from WAF Reliability reviews in Advisor help you focus on the most important recommendations to ensure your workloads remain resilient. As part of the review, personalized and prioritized recommendations from Microsoft Cloud Solution Architects will be presented to you and your team. You can triage recommendations (accept or reject), manage their lifecycle on Advisor, and work with your Microsoft account team to track resolution. You can reach out to your account team to request a Well-Architected Reliability Assessment, which helps you optimize workload resiliency and reliability by implementing curated recommendations and tracking their lifecycle on Advisor.
+
+To learn more, visit [Azure Advisor Resiliency Reviews](/azure/advisor/advisor-resiliency-reviews).
+ ## March 2024 ### Well-Architected Framework (WAF) assessments & recommendations
If you're interested in workload based recommendations, reach out to your accoun
### Cost Optimization workbook template
-The Azure Cost Optimization workbook serves as a centralized hub for some of the most used tools that can help you drive utilization and efficiency goals. It offers a range of recommendations, including Azure Advisor cost recommendations, identification of idle resources, and management of improperly deallocated Virtual Machines. Additionally, it provides insights into leveraging Azure Hybrid benefit options for Windows, Linux, and SQL databases
+The Azure Cost Optimization workbook serves as a centralized hub for some of the most used tools that can help you drive utilization and efficiency goals. It offers a range of recommendations, including Azure Advisor cost recommendations, identification of idle resources, and management of improperly deallocated Virtual Machines. Additionally, it provides insights into leveraging Azure Hybrid benefit options for Windows, Linux, and SQL databases.
To learn more, visit [Understand and optimize your Azure costs using the Cost Optimization workbook](/azure/advisor/advisor-cost-optimization-workbook).
To learn more, visit [Prepare migration of your workloads impacted by service re
Azure Advisor now provides the option to postpone or dismiss a recommendation for multiple resources at once. Once you open a recommendations details page with a list of recommendations and associated resources, select the relevant resources and choose **Postpone** or **Dismiss** in the command bar at the top of the page.
-To learn more, visit [Dismissing and postponing recommendations](/azure/advisor/view-recommendations#dismissing-and-postponing-recommendations)
+To learn more, visit [Dismissing and postponing recommendations](/azure/advisor/view-recommendations#dismissing-and-postponing-recommendations).
### VM/VMSS right-sizing recommendations with custom lookback period
To learn more, visit [Azure Advisor for MySQL](/azure/mysql/single-server/concep
### Unlimited number of subscriptions
-It is easier now to get an overview of optimization opportunities available to your organization ΓÇô no need to spend time and effort to apply filters and process subscription in batches.
+It's easier now to get an overview of optimization opportunities available to your organization – no need to spend time and effort to apply filters and process subscriptions in batches.
To learn more, visit [Get started with Azure Advisor](advisor-get-started.md).
advisor Advisor Resiliency Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-resiliency-reviews.md
You can manage access to Advisor personalized recommendations using the followin
| **Name** | **Description** | ||::| |Subscription Reader|View reviews for a workload and recommendations linked to them.|
-|Subscription Owner<br>Subscription Contributor|View reviews for a workload, triage recommendations linked to those reviews, manage review recommendation lifecycle.|
-|Advisor Recommendations Contributor (Assessments and Reviews)|View review recommendations, accept review recommendations, manage review recommendations' lifecycle.|
+|Subscription Owner<br>Subscription Contributor|View reviews for a workload, triage recommendations linked to those reviews, manage the recommendation lifecycle.|
+|Advisor Recommendations Contributor (Assessments and Reviews)|View accepted recommendations, and manage the recommendation lifecycle.|
You can find detailed instructions on how to assign a role using the Azure portal - [Assign Azure roles using the Azure portal - Azure RBAC](/azure/role-based-access-control/role-assignments-portal?tabs=delegate-condition). Additional information is available in [Steps to assign an Azure role - Azure RBAC](/azure/role-based-access-control/role-assignments-steps).
advisor Advisor Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-security-recommendations.md
To learn more about Advisor recommendations, see:
* [Advisor reliability recommendations](advisor-reference-reliability-recommendations.md) * [Advisor operational excellence recommendations](advisor-reference-operational-excellence-recommendations.md) * [Advisor REST API](/rest/api/advisor/)
-
advisor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/resource-graph-samples.md
# Azure Resource Graph sample queries for Azure Advisor
-This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md)
-sample queries for Azure Advisor. For a complete list of Azure Resource Graph samples, see
-[Resource Graph samples by Category](../governance/resource-graph/samples/samples-by-category.md)
-and [Resource Graph samples by Table](../governance/resource-graph/samples/samples-by-table.md).
+This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample queries for Azure Advisor. For a complete list of Azure Resource Graph samples, see [Resource Graph samples by Category](../governance/resource-graph/samples/samples-by-category.md) and [Resource Graph samples by Table](../governance/resource-graph/samples/samples-by-table.md).
## Sample queries
ai-services App Schema Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/app-schema-definition.md
When you import and export the app, choose either `.json` or `.lu`.
* Moving to version 7.x, the entities are represented as nested machine-learning entities. * Support for authoring nested machine-learning entities with `enableNestedChildren` property on the following authoring APIs:
- * [Add label](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c08)
- * [Add batch label](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c09)
- * [Review labels](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c0a)
- * [Suggest endpoint queries for entities](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2e)
- * [Suggest endpoint queries for intents](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2d)
-
+ * Add label
+ * Add batch label
+ * Review labels
+ * Suggest endpoint queries for entities
+ * Suggest endpoint queries for intents
+ For more information, see the [LUIS reference documentation](/rest/api/cognitiveservices-luis/authoring/features?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true).
```json { "luis_schema_version": "7.0.0",
ai-services Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/utterances.md
When you start [adding example utterances](../how-to/entities.md) to your LUIS
## Utterances aren't always well formed
-Your app may need to process sentences, like "Book a ticket to Paris for me", or a fragment of a sentence, like "Booking" or "Paris flight" Users also often make spelling mistakes. When planning your app, consider whether or not you want to use [Bing Spell Check](../luis-tutorial-bing-spellcheck.md) to correct user input before passing it to LUIS.
+Your app might need to process sentences, like "Book a ticket to Paris for me," or a fragment of a sentence, like "Booking" or "Paris flight." Users also often make spelling mistakes. When planning your app, consider whether or not you want to use [Bing Spell Check](../luis-tutorial-bing-spellcheck.md) to correct user input before passing it to LUIS.
-If you do not spell check user utterances, you should train LUIS on utterances that include typos and misspellings.
+If you don't spell check user utterances, you should train LUIS on utterances that include typos and misspellings.
### Use the representative language of the user
-When choosing utterances, be aware that what you think are common terms or phrases might not be common for the typical user of your client application. They may not have domain experience or use different terminology. Be careful when using terms or phrases that a user would only say if they were an expert.
+When choosing utterances, be aware that what you think are common terms or phrases might not be common for the typical user of your client application. They might not have domain experience or use different terminology. Be careful when using terms or phrases that a user would only say if they were an expert.
### Choose varied terminology and phrasing
-You will find that even if you make efforts to create varied sentence patterns, you will still repeat some vocabulary. For example, the following utterances have similar meaning, but different terminology and phrasing:
+You'll find that even if you make efforts to create varied sentence patterns, you'll still repeat some vocabulary. For example, the following utterances have similar meaning, but different terminology and phrasing:
* "*How do I get a computer?*" * "*Where do I get a computer?*"
The core term here, _computer_, isn't varied. Use alternatives such as desktop c
## Example utterances in each intent
-Each intent needs to have example utterances - at least 15. If you have an intent that does not have any example utterances, you will not be able to train LUIS. If you have an intent with one or few example utterances, LUIS may not accurately predict the intent.
+Each intent needs to have example utterances - at least 15. If you have an intent that doesn't have any example utterances, you will not be able to train LUIS. If you have an intent with one or few example utterances, LUIS might not accurately predict the intent.
## Add small groups of utterances
Each time you iterate on your model to improve it, don't add large quantities of
LUIS builds effective models with utterances that are carefully selected by the LUIS model author. Adding too many utterances isn't valuable because it introduces confusion.
-It is better to start with a few utterances, then [review the endpoint utterances](../how-to/improve-application.md) for correct intent prediction and entity extraction.
+It's better to start with a few utterances, then [review the endpoint utterances](../how-to/improve-application.md) for correct intent prediction and entity extraction.
## Utterance normalization
If you turn on a normalization setting, scores in the **Test** pane, batch tes
When you clone a version in the LUIS portal, the version settings are kept in the new cloned version.
-Set your app's version settings using the LUIS portal by selecting **Manage** from the top navigation menu, in the **Application Settings** page. You can also use the [Update Version Settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings). See the [Reference](../luis-reference-application-settings.md) documentation for more information.
+Set your app's version settings using the LUIS portal by selecting **Manage** from the top navigation menu, in the **Application Settings** page. You can also use the [Update Version Settings API](/rest/api/cognitiveservices-luis/authoring/versions/update?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true). See the [Reference](../luis-reference-application-settings.md) documentation for more information.
## Word forms
Diacritics are marks or signs within the text, such as:
Normalizing **punctuation** means that before your models get trained and before your endpoint queries get predicted, punctuation will be removed from the utterances.
-Punctuation is a separate token in LUIS. An utterance that contains a period at the end is a separate utterance than one that does not contain a period at the end, and may get two different predictions.
+Punctuation is a separate token in LUIS. An utterance that contains a period at the end is a separate utterance than one that doesn't contain a period at the end, and might get two different predictions.
-If punctuation is not normalized, LUIS doesn't ignore punctuation marks by default because some client applications may place significance on these marks. Make sure to include example utterances that use punctuation, and ones that don't, for both styles to return the same relative scores.
+If punctuation isn't normalized, LUIS doesn't ignore punctuation marks by default because some client applications might place significance on these marks. Make sure to include example utterances that use punctuation, and ones that don't, for both styles to return the same relative scores.
Make sure the model handles punctuation either in the example utterances (both having and not having punctuation) or in [patterns](../concepts/patterns-features.md) where it is easier to ignore punctuation. For example: I am applying for the {Job} position[.]
If you want to ignore specific words or punctuation in patterns, use a [pattern]
## Training with all utterances
-Training is generally non-deterministic: utterance prediction can vary slightly across versions or apps. You can remove non-deterministic training by updating the [version settings](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) API with the UseAllTrainingData name/value pair to use all training data.
+Training is nondeterministic: utterance prediction can vary slightly across versions or apps. You can remove nondeterministic training by updating the [version settings](/rest/api/cognitiveservices-luis/authoring/settings/update?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API with the UseAllTrainingData name/value pair to use all training data.
## Testing utterances
-Developers should start testing their LUIS application with real data by sending utterances to the [prediction endpoint](../luis-how-to-azure-subscription.md) URL. These utterances are used to improve the performance of the intents and entities with [Review utterances](../how-to/improve-application.md). Tests submitted using the testing pane in the LUIS portal are not sent through the endpoint, and don't contribute to active learning.
+Developers should start testing their LUIS application with real data by sending utterances to the [prediction endpoint](../luis-how-to-azure-subscription.md) URL. These utterances are used to improve the performance of the intents and entities with [Review utterances](../how-to/improve-application.md). Tests submitted using the testing pane in the LUIS portal aren't sent through the endpoint, and don't contribute to active learning.
## Review utterances
After your model is trained, published, and receiving [endpoint](../luis-glossar
### Label for word meaning
-If the word choice or word arrangement is the same, but doesn't mean the same thing, do not label it with the entity.
+If the word choice or word arrangement is the same, but doesn't mean the same thing, don't label it with the entity.
In the following utterances, the word fair is a homograph, which means it's spelled the same but has a different meaning:
-* "*What kind of county fairs are happening in the Seattle area this summer?*"
+* "*What kinds of county fairs are happening in the Seattle area this summer?*"
* "*Is the current 2-star rating for the restaurant fair?* If you want an event entity to find all event data, label the word fair in the first utterance, but not in the second.
LUIS expects variations in an intent's utterances. The utterances can vary while
| Don't use the same format | Do use varying formats | |--|--| | Buy a ticket to Seattle|Buy 1 ticket to Seattle|
-|Buy a ticket to Paris|Reserve two seats on the red eye to Paris next Monday|
+|Buy a ticket to Paris|Reserve two tickets on the red eye to Paris next Monday|
|Buy a ticket to Orlando |I would like to book 3 tickets to Orlando for spring break |
ai-services Developer Reference Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/developer-reference-resource.md
Both authoring and prediction endpoint APIS are available from REST APIs:
|Type|Version| |--|--|
-|Authoring|[V2](https://go.microsoft.com/fwlink/?linkid=2092087)<br>[preview V3](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview)|
-|Prediction|[V2](https://go.microsoft.com/fwlink/?linkid=2092356)<br>[V3](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/)|
+|Authoring|[V2](https://go.microsoft.com/fwlink/?linkid=2092087)<br>[preview V3](/rest/api/cognitiveservices-luis/authoring/operation-groups)|
+|Prediction|[V2](https://go.microsoft.com/fwlink/?linkid=2092356)<br>[V3](/rest/api/cognitiveservices-luis/runtime/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)|
### REST Endpoints
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/faq.md
Title: LUIS frequently asked questions
-description: Use this article to see frequently asked questions about LUIS, and troubleshooting information
+description: Use this article to see frequently asked questions about LUIS, and troubleshooting information.
Yes, [Speech](../speech-service/how-to-recognize-intents-from-speech-csharp.md#l
## What are Synonyms and word variations?
-LUIS has little or no knowledge of the broader _NLP_ aspects, such as semantic similarity, without explicit identification in examples. For example, the following tokens (words) are three different things until they are used in similar contexts in the examples provided:
+LUIS has little or no knowledge of the broader _NLP_ aspects, such as semantic similarity, without explicit identification in examples. For example, the following tokens (words) are three different things until they're used in similar contexts in the examples provided:
* Buy * Buying * Bought
-For semantic similarity Natural Language Understanding (NLU), you can use [Conversation Language Understanding](../language-service/conversational-language-understanding/overview.md)
+For semantic similarity Natural Language Understanding (NLU), you can use [Conversation Language Understanding](../language-service/conversational-language-understanding/overview.md).
## What are the Authoring and prediction pricing?
-Language Understand has separate resources, one type for authoring, and one type for querying the prediction endpoint, each has their own pricing. See [Resource usage and limits](luis-limits.md#resource-usage-and-limits)
+Language Understanding has separate resources: one type for authoring, and one type for querying the prediction endpoint, each with its own pricing. See [Resource usage and limits](luis-limits.md#resource-usage-and-limits).
## What are the supported regions?
-See [region support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services)
+See [region support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services).
## How does LUIS store data?
-LUIS stores data encrypted in an Azure data store corresponding to the region specified by the key. Data used to train the model such as entities, intents, and utterances will be saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data will be deleted with it. If an application hasn't been used in 90 days, it will be deleted.See [Data retention](luis-concept-data-storage.md) to know more details about data storage
+LUIS stores data encrypted in an Azure data store corresponding to the region specified by the key. Data used to train the model such as entities, intents, and utterances will be saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data will be deleted with it. If an application hasn't been used in 90 days, it will be deleted. See [Data retention](luis-concept-data-storage.md) for more details about data storage.
## Does LUIS support Customer-Managed Keys (CMK)?
Use one of the following solutions:
## Why is my app getting different scores every time I train?
-Enable or disable the use non-deterministic training option. When disabled, training will use all available data. When enabled (by default), training will use a random sample each time the app is trained, to be used as a negative for the intent. To make sure that you are getting same scores every time, make sure you train your LUIS app with all your data. See the [training article](how-to/train-test.md#change-deterministic-training-settings-using-the-version-settings-api) for more information.
+Enable or disable the **use nondeterministic training** option. When disabled, training will use all available data. When enabled (by default), training will use a random sample each time the app is trained, to be used as a negative for the intent. To make sure that you're getting the same scores every time, make sure you train your LUIS app with all your data. See the [training article](how-to/train-test.md#change-deterministic-training-settings-using-the-version-settings-api) for more information.
## I received an HTTP 403 error status code. How do I fix it? Can I handle more requests per second?
To get the same top intent between all the apps, make sure the intent prediction
When training these apps, make sure to [train with all data](how-to/train-test.md).
-Designate a single main app. Any utterances that are suggested for review should be added to the main app, then moved back to all the other apps. This is either a full export of the app, or loading the labeled utterances from the main app to the other apps. Loading can be done from either the [LUIS](./luis-reference-regions.md) website or the authoring API for a [single utterance](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c08) or for a [batch](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09).
+Designate a single main app. Any utterances that are suggested for review should be added to the main app, then moved back to all the other apps. This is either a full export of the app, or loading the labeled utterances from the main app to the other apps. Loading can be done from either the [LUIS](./luis-reference-regions.md?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) website or the authoring API for a [single utterance](/rest/api/cognitiveservices-luis/authoring/examples/add) or for a [batch](/rest/api/cognitiveservices-luis/authoring/examples/batch?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true).
Schedule a periodic review, such as every two weeks, of [endpoint utterances](how-to/improve-application.md) for active learning, then retrain and republish the app.
ai-services Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/sign-in.md
Last updated 01/19/2024
[!INCLUDE [deprecation notice](../includes/deprecation-notice.md)]
-Use this article to get started with the LUIS portal, and create an authoring resource. After completing the steps in this article, you will be able to create and publish LUIS apps.
+Use this article to get started with the LUIS portal, and create an authoring resource. After completing the steps in this article, you'll be able to create and publish LUIS apps.
## Access the portal
-1. To get started with LUIS, go to the [LUIS Portal](https://www.luis.ai/). If you do not already have a subscription, you will be prompted to go create a [free account](https://azure.microsoft.com/free/cognitive-services/) and return back to the portal.
+1. To get started with LUIS, go to the [LUIS Portal](https://www.luis.ai/). If you don't already have a subscription, you'll be prompted to go create a [free account](https://azure.microsoft.com/free/cognitive-services/) and return back to the portal.
2. Refresh the page to update it with your newly created subscription 3. Select your subscription from the dropdown list :::image type="content" source="../media/migrate-authoring-key/select-subscription-sign-in-2.png" alt-text="A screenshot showing how to select a subscription." lightbox="../media/migrate-authoring-key/select-subscription-sign-in-2.png":::
-4. If your subscription lives under another tenant, you will not be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar containing your initials in the top-right section of the screen. Select **Choose a different authoring resource** from the top to reopen the window.
+4. If your subscription lives under another tenant, you won't be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar containing your initials in the top-right section of the screen. Select **Choose a different authoring resource** from the top to reopen the window.
:::image type="content" source="../media/migrate-authoring-key/switch-directories.png" alt-text="A screenshot showing how to choose a different authoring resource." lightbox="../media/migrate-authoring-key/switch-directories.png":::
Use this article to get started with the LUIS portal, and create an authoring re
:::image type="content" source="../media/migrate-authoring-key/create-new-authoring-resource-2.png" alt-text="A screenshot showing the page for adding resource information." lightbox="../media/migrate-authoring-key/create-new-authoring-resource-2.png":::
-* **Tenant Name** - the tenant your Azure subscription is associated with. You will not be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar at the top-right corner of the screen, containing your initials. Select **Choose a different authoring resource** from the top to reopen the window.
-* **Azure Resource group name** - a custom resource group name you choose in your subscription. Resource groups allow you to group Azure resources for access and management. If you currently do not have a resource group in your subscription, you will not be allowed to create one in the LUIS portal. Go to [Azure portal](https://portal.azure.com/#create/Microsoft.ResourceGroup) to create one then go to LUIS to continue the sign-in process.
+* **Tenant Name** - the tenant your Azure subscription is associated with. You won't be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar at the top-right corner of the screen, containing your initials. Select **Choose a different authoring resource** from the top to reopen the window.
+* **Azure Resource group name** - a custom resource group name you choose in your subscription. Resource groups allow you to group Azure resources for access and management. If you currently don't have a resource group in your subscription, you won't be allowed to create one in the LUIS portal. Go to [Azure portal](https://portal.azure.com/#create/Microsoft.ResourceGroup) to create one then go to LUIS to continue the sign-in process.
* **Azure Resource name** - a custom name you choose, used as part of the URL for your authoring transactions. Your resource name can only include alphanumeric characters, `-`, and can't start or end with `-`. If any other symbols are included in the name, creating a resource will fail.
-* **Location** - Choose to author your applications in one of the [three authoring locations](../luis-reference-regions.md) that are currently supported by LUIS including: West US, West Europe and East Australia
+* **Location** - Choose to author your applications in one of the [three authoring locations](../luis-reference-regions.md) currently supported by LUIS: West US, West Europe, and East Australia.
* **Pricing tier** - By default, the F0 authoring pricing tier is selected, as it is the recommended tier. Create a [customer managed key](../encrypt-data-at-rest.md) from the Azure portal if you are looking for an extra layer of security. 8. Now you have successfully signed in to LUIS. You can now start creating applications.
There are a couple of ways to create a LUIS app. You can create a LUIS app in th
* Import a LUIS app from a .lu or .json file that already contains intents, utterances, and entities. **Using the authoring APIs** You can create a new app with the authoring APIs in a couple of ways:
-* [Add application](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f) - start with an empty app and create intents, utterances, and entities.
-* [Add prebuilt application](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/59104e515aca2f0b48c76be5) - start with a prebuilt domain, including intents, utterances, and entities.
+* [Add application](/rest/api/cognitiveservices-luis/authoring/apps/add?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) - start with an empty app and create intents, utterances, and entities (a minimal request sketch follows this list).
+* [Add prebuilt application](/rest/api/cognitiveservices-luis/authoring/apps/add-custom-prebuilt-domain?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) - start with a prebuilt domain, including intents, utterances, and entities.
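A minimal sketch of the **Add application** call in Python, assuming the v3.0-preview authoring route and payload shape shown below (the endpoint, key, and app name are placeholders; confirm the field names against the linked reference):

```python
import requests

# Placeholders; the route and payload follow the v3.0-preview authoring pattern
# and should be confirmed against the linked Add application reference.
AUTHORING_ENDPOINT = "https://<your-authoring-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<authoring-key>"}

payload = {
    "name": "HomeAutomation",   # hypothetical app name
    "culture": "en-us",
    "initialVersionId": "0.1",
}

response = requests.post(
    f"{AUTHORING_ENDPOINT}/luis/authoring/v3.0-preview/apps/",
    headers=HEADERS,
    json=payload,
)
response.raise_for_status()
print("New app ID:", response.json())  # the API returns the new app's ID
```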
## Create new app in LUIS using portal 1. On **My Apps** page, select your **Subscription** , and **Authoring resource** then select **+ New App**.
ai-services Train Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/train-test.md
To train your app in the LUIS portal, you only need to select the **Train** butt
Training with the REST APIs is a two-step process.
-1. Send an HTTP POST [request for training](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c45).
-2. Request the [training status](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c46) with an HTTP GET request.
+1. Send an HTTP POST [request for training](/rest/api/cognitiveservices-luis/authoring/train/train-version?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true).
+2. Request the [training status](/rest/api/cognitiveservices-luis/authoring/train/get-status?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) with an HTTP GET request.
In order to know when training is complete, you must poll the status until all models are successfully trained.
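A hedged Python sketch of that two-step flow, assuming the v3.0-preview authoring routes linked above (the endpoint, app ID, version, and key are placeholders, and the status response shape noted in the comments is an assumption):

```python
import time
import requests

# Placeholders; verify the train route against the linked training references.
AUTHORING_ENDPOINT = "https://<your-authoring-resource>.cognitiveservices.azure.com"
APP_ID = "<app-id>"
VERSION_ID = "0.1"
HEADERS = {"Ocp-Apim-Subscription-Key": "<authoring-key>"}

train_url = f"{AUTHORING_ENDPOINT}/luis/authoring/v3.0-preview/apps/{APP_ID}/versions/{VERSION_ID}/train"

# Step 1: queue training with an HTTP POST.
requests.post(train_url, headers=HEADERS).raise_for_status()

# Step 2: poll the same route with HTTP GET until every model reports success.
while True:
    statuses = requests.get(train_url, headers=HEADERS).json()
    # Assumed response shape: a list of per-model objects with a details.status field.
    states = {item["details"]["status"] for item in statuses}
    if "Fail" in states:
        raise RuntimeError(f"Training failed: {statuses}")
    if states <= {"Success", "UpToDate"}:
        break
    time.sleep(2)
```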
Inspect the test result details in the **Inspect** panel.
## Change deterministic training settings using the version settings API
-Use the [Version settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) with the UseAllTrainingData set to *true* to turn off deterministic training.
+Use the [Version settings API](/rest/api/cognitiveservices-luis/authoring/settings/update?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) with UseAllTrainingData set to *true* to turn off nondeterministic training.
## Change deterministic training settings using the LUIS portal
ai-services Luis Concept Devops Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-testing.md
When LUIS is training a model, such as an intent, it needs both positive data -
The result of this non-deterministic training is that you may get a slightly [different prediction response between different training sessions](./luis-concept-prediction-score.md), usually for intents and/or entities where the [prediction score](./luis-concept-prediction-score.md) is not high.
-If you want to disable non-deterministic training for those LUIS app versions that you're building for the purpose of testing, use the [Version settings API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/versions-update-application-version-settings) with the `UseAllTrainingData` setting set to `true`.
+If you want to disable non-deterministic training for those LUIS app versions that you're building for the purpose of testing, use the [Version settings API](/rest/api/cognitiveservices-luis/authoring/versions?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true) with the `UseAllTrainingData` setting set to `true`.
## Next steps
ai-services Luis Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-howto.md
You can get your authoring key from the [LUIS portal](https://www.luis.ai/) by c
Authoring APIs for packaged apps:
-* [Published package API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagepublishedapplicationasgzip)
-* [Not-published, trained-only package API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagetrainedapplicationasgzip)
+* [Published package API](/rest/api/cognitiveservices-luis/authoring/apps/package-published-application-as-gzip?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
+* [Not-published, trained-only package API](/rest/api/cognitiveservices-luis/authoring/apps/package-trained-application-as-gzip?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
### The host computer
Once the container is on the [host computer](#the-host-computer), use the follow
1. When you are done with the container, [import the endpoint logs](#import-the-endpoint-logs-for-active-learning) from the output mount in the LUIS portal and [stop](#stop-the-container) the container. 1. Use LUIS portal's [active learning](how-to/improve-application.md) on the **Review endpoint utterances** page to improve the app.
-The app running in the container can't be altered. In order to change the app in the container, you need to change the app in the LUIS service using the [LUIS](https://www.luis.ai) portal or use the LUIS [authoring APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f). Then train and/or publish, then download a new package and run the container again.
+The app running in the container can't be altered. In order to change the app in the container, you need to change the app in the LUIS service using the [LUIS](https://www.luis.ai) portal or use the LUIS [authoring APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true). Then train and/or publish, then download a new package and run the container again.
The LUIS app inside the container can't be exported back to the LUIS service. Only the query logs can be uploaded.
The container provides REST-based query prediction endpoint APIs. Endpoints for
Use the host, `http://localhost:5000`, for container APIs.
-# [V3 prediction endpoint](#tab/v3)
- |Package type|HTTP verb|Route|Query parameters|
|--|--|--|--|
|Published|GET, POST|`/luis/v3.0/apps/{appId}/slots/{slotName}/predict?` `/luis/prediction/v3.0/apps/{appId}/slots/{slotName}/predict?`|`query={query}`<br>[`&verbose`]<br>[`&log`]<br>[`&show-all-intents`]|
The query parameters configure how and what is returned in the query response:
|`log`|boolean|Logs queries, which can be used later for [active learning](how-to/improve-application.md). Default is false.|
|`show-all-intents`|boolean|A boolean value indicating whether to return all the intents or the top scoring intent only. Default is false.|
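For example, a hedged Python sketch of a published-slot query against a local container, using the route and query parameters from the table above (the app ID is a placeholder):

```python
import requests

APP_ID = "<app-id>"  # placeholder

# Query the published slot of the app packaged into the container.
url = f"http://localhost:5000/luis/v3.0/apps/{APP_ID}/slots/production/predict"
params = {
    "query": "turn on the kitchen lights",
    "verbose": "true",
    "show-all-intents": "true",
    "log": "true",  # keep logs so they can be imported later for active learning
}

response = requests.get(url, params=params)
print(response.json())
```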
-# [V2 prediction endpoint](#tab/v2)
-
-|Package type|HTTP verb|Route|Query parameters|
-|--|--|--|--|
-|Published|[GET](https://westus.dev.cognitive.microsoft.com/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee78), [POST](https://westus.dev.cognitive.microsoft.com/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee79)|`/luis/v2.0/apps/{appId}?`|`q={q}`<br>`&staging`<br>[`&timezoneOffset`]<br>[`&verbose`]<br>[`&log`]<br>|
-|Versioned|GET, POST|`/luis/v2.0/apps/{appId}/versions/{versionId}?`|`q={q}`<br>[`&timezoneOffset`]<br>[`&verbose`]<br>[`&log`]|
-
-The query parameters configure how and what is returned in the query response:
-
-|Query parameter|Type|Purpose|
-|--|--|--|
-|`q`|string|The user's utterance.|
-|`timezoneOffset`|number|The timezoneOffset allows you to [change the timezone](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity) used by the prebuilt entity datetimeV2.|
-|`verbose`|boolean|Returns all intents and their scores when set to true. Default is false, which returns only the top intent.|
-|`staging`|boolean|Returns query from staging environment results if set to true. |
-|`log`|boolean|Logs queries, which can be used later for [active learning](how-to/improve-application.md). Default is true.|
-
-***
### Query the LUIS app
In this article, you learned concepts and workflow for downloading, installing,
* Use more [Azure AI containers](../cognitive-services-container-support.md) <!-- Links - external -->
-[download-published-package]: https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagepublishedapplicationasgzip
-[download-versioned-package]: https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-packagetrainedapplicationasgzip
+[download-published-package]: /rest/api/cognitiveservices-luis/authoring/apps/package-published-application-as-gzip?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true
+[download-versioned-package]: /rest/api/cognitiveservices-luis/authoring/apps/package-trained-application-as-gzip?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true
[unsupported-dependencies]: luis-container-limitations.md#unsupported-dependencies-for-latest-container
ai-services Luis Get Started Get Intent From Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-get-started-get-intent-from-browser.md
Last updated 01/19/2024
-#Customer intent: As an developer familiar with how to use a browser but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response.
+#Customer intent: As a developer familiar with how to use a browser but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response.
# How to query the prediction runtime with user text
ai-services Luis Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-glossary.md
The Language Understanding (LUIS) glossary explains terms that you might encount
## Active version
-The active version is the [version](luis-how-to-manage-versions.md) of your app that is updated when you make changes to the model using the LUIS portal. In the LUIS portal, if you want to make changes to a version that is not the active version, you need to first set that version as active.
+The active version is the [version](luis-how-to-manage-versions.md) of your app that is updated when you make changes to the model using the LUIS portal. In the LUIS portal, if you want to make changes to a version that isn't the active version, you need to first set that version as active.
## Active learning
See also:
## Application (App)
-In LUIS, your application, or app, is a collection of machine learned models, built on the same data set, that works together to predict intents and entities for a particular scenario. Each application has a separate prediction endpoint.
+In LUIS, your application, or app, is a collection of machine-learned models, built on the same data set, that works together to predict intents and entities for a particular scenario. Each application has a separate prediction endpoint.
If you are building an HR bot, you might have a set of intents, such as "Schedule leave time", "inquire about benefits" and "update personal information" and entities for each one of those intents that you group into a single application.
An example for an animal batch test is the number of sheep that were predicted d
### True negative (TN)
-A true negative is when your app correctly predicts no match. In batch testing, a true negative occurs when your app does predict an intent or entity for an example that has not been labeled with that intent or entity.
+A true negative is when your app correctly predicts no match. In batch testing, a true negative occurs when your app doesn't predict an intent or entity for an example that hasn't been labeled with that intent or entity.
### True positive (TP)
A collaborator is conceptually the same thing as a [contributor](#contributor).
## Contributor
-A contributor is not the [owner](#owner) of the app, but has the same permissions to add, edit, and delete the intents, entities, utterances. A contributor provides Azure role-based access control (Azure RBAC) to a LUIS app.
+A contributor isn't the [owner](#owner) of the app, but has the same permissions to add, edit, and delete the intents, entities, utterances. A contributor provides Azure role-based access control (Azure RBAC) to a LUIS app.
See also: * [How-to](luis-how-to-collaborate.md#add-contributor-to-azure-authoring-resource) add contributors
Learn more about authoring your app programmatically from the [Developer referen
### Prediction endpoint
-The LUIS prediction endpoint URL is where you submit LUIS queries after the [LUIS app](#application-app) is authored and published. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID. You can find the endpoint on the **[Azure resources](luis-how-to-azure-subscription.md)** page of your app, or you can get the endpoint URL from the [Get App Info](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c37) API.
+The LUIS prediction endpoint URL is where you submit LUIS queries after the [LUIS app](#application-app) is authored and published. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID. You can find the endpoint on the **[Azure resources](luis-how-to-azure-subscription.md)** page of your app, or you can get the endpoint URL from the [Get App Info](/rest/api/cognitiveservices-luis/authoring/apps/get?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API.
Your access to the prediction endpoint is authorized with the LUIS prediction key. ## Entity
-[Entities](concepts/entities.md) are words in utterances that describe information used to fulfill or identify an intent. If your entity is complex and you would like your model to identify specific parts, you can break your model into subentities. For example, you might want you model to predict an address, but also the subentities of street, city, state, and zipcode. Entities can also be used as features to models. Your response from the LUIS app will include both the predicted intents and all the entities.
+[Entities](concepts/entities.md) are words in utterances that describe information used to fulfill or identify an intent. If your entity is complex and you would like your model to identify specific parts, you can break your model into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode. Entities can also be used as features to models. Your response from the LUIS app includes both the predicted intents and all the entities.
### Entity extractor
An entity that uses text matching to extract data:
A [list entity](reference-entity-list.md) represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine-learned entities.
-The entity will be predicted if a word in the list entity is included in the list. For example, if you have a list entity called "size" and you have the words "small, medium, large" in the list, then the size entity will be predicted for all utterances where the words "small", "medium", or "large" are used regardless of the context.
+The entity will be predicted if a word in the list entity is included in the list. For example, if you have a list entity called "size" and you have the words "small, medium, large" in the list, then the size entity will be predicted for all utterances where the words "small," "medium," or "large" are used regardless of the context.
### Regular expression A [regular expression entity](reference-entity-regular-expression.md) represents a regular expression. Regular expression entities are exact matches, unlike machine-learned entities. ### Prebuilt entity
-See Prebuilt model's entry for [prebuilt entity](#prebuilt-entity)
+See Prebuilt model's entry for [prebuilt entity](#prebuilt-entity).
## Features
In machine learning, a feature is a characteristic that helps the model recogniz
This term is also referred to as a **[machine-learning feature](concepts/patterns-features.md)**.
-These hints are used in conjunction with the labels to learn how to predict new data. LUIS supports both phrase lists and using other models as features.
+These hints are used with the labels to learn how to predict new data. LUIS supports both phrase lists and using other models as features.
### Required feature A required feature is a way to constrain the output of a LUIS model. When a feature for an entity is marked as required, the feature must be present in the example for the entity to be predicted, regardless of what the machine learned model predicts.
-Consider an example where you have a prebuilt-number feature that you have marked as required on the quantity entity for a menu ordering bot. When your bot sees `I want a bajillion large pizzas?`, bajillion will not be predicted as a quantity regardless of the context in which it appears. Bajillion is not a valid number and wonΓÇÖt be predicted by the number pre-built entity.
+Consider an example where you have a prebuilt-number feature that you have marked as required on the quantity entity for a menu ordering bot. When your bot sees `I want a bajillion large pizzas?`, bajillion will not be predicted as a quantity regardless of the context in which it appears. Bajillion isn't a valid number and won't be predicted by the number prebuilt entity.
## Intent
-An [intent](concepts/intents.md) represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's input, such as booking a flight, or paying a bill. In LUIS, an utterance as a whole is classified as an intent, but parts of the utterance are extracted as entities
+An [intent](concepts/intents.md) represents a task or action the user wants to perform. It's a purpose or goal expressed in a user's input, such as booking a flight, or paying a bill. In LUIS, an utterance as a whole is classified as an intent, but parts of the utterance are extracted as entities.
## Labeling examples Labeling, or marking, is the process of associating a positive or negative example with a model. ### Labeling for intents
-In LUIS, intents within an app are mutually exclusive. This means when you add an utterance to an intent, it is considered a _positive_ example for that intent and a _negative_ example for all other intents. Negative examples should not be confused with the "None" intent, which represents utterances that are outside the scope of the app.
+In LUIS, intents within an app are mutually exclusive. This means when you add an utterance to an intent, it is considered a _positive_ example for that intent and a _negative_ example for all other intents. Negative examples shouldn't be confused with the "None" intent, which represents utterances that are outside the scope of the app.
### Labeling for entities In LUIS, you [label](how-to/entities.md) a word or phrase in an intent's example utterance with an entity as a _positive_ example. Labeling shows the intent what it should predict for that utterance. The labeled utterances are used to train the intent.
You add values to your [list](#list-entity) entities. Each of those values can h
## Overfitting
-Overfitting happens when the model is fixated on the specific examples and is not able to generalize well.
+Overfitting happens when the model is fixated on the specific examples and isn't able to generalize well.
## Owner
A prebuilt domain is a LUIS app configured for a specific domain such as home au
### Prebuilt entity
-A prebuilt entity is an entity LUIS provides for common types of information such as number, URL, and email. These are created based on public data. You can choose to add a prebuilt entity as a stand-alone entity, or as a feature to an entity
+A prebuilt entity is an entity LUIS provides for common types of information such as number, URL, and email. These are created based on public data. You can choose to add a prebuilt entity as a stand-alone entity, or as a feature to an entity.
### Prebuilt intent
A prediction is a REST request to the Azure LUIS prediction service that takes i
The [prediction key](luis-how-to-azure-subscription.md) is the key associated with the LUIS service you created in Azure that authorizes your usage of the prediction endpoint.
-This key is not the authoring key. If you have a prediction endpoint key, it should be used for any endpoint requests instead of the authoring key. You can see your current prediction key inside the endpoint URL at the bottom of Azure resources page in LUIS website. It is the value of the subscription-key name/value pair.
+This key isn't the authoring key. If you have a prediction endpoint key, it should be used for any endpoint requests instead of the authoring key. You can see your current prediction key inside the endpoint URL at the bottom of the Azure resources page in the LUIS website. It's the value of the subscription-key name/value pair.
### Prediction resource
The prediction resource has an Azure "kind" of `LUIS`.
### Prediction score
-The [score](luis-concept-prediction-score.md) is a number from 0 and 1 that is a measure of how confident the system is that a particular input utterance matches a particular intent. A score closer to 1 means the system is very confident about its output and a score closer to 0 means the system is confident that the input does not match a particular output. Scores in the middle mean the system is very unsure of how to make the decision.
+The [score](luis-concept-prediction-score.md) is a number from 0 to 1 that measures how confident the system is that a particular input utterance matches a particular intent. A score closer to 1 means the system is very confident about its output, and a score closer to 0 means the system is confident that the input doesn't match a particular output. Scores in the middle mean the system is very unsure of how to make the decision.
For example, take a model that is used to identify if some customer text includes a food order. It might give a score of 1 for "I'd like to order one coffee" (the system is very confident that this is an order) and a score of 0 for "my team won the game last night" (the system is very confident that this is NOT an order). And it might have a score of 0.5 for "let's have some tea" (isn't sure if this is an order or not).
In LUIS [list entities](reference-entity-list.md), you can create a normalized v
|Normalized value| Synonyms|
|--|--|
-|Small| the little one, 8 ounce|
-|Medium| regular, 12 ounce|
-|Large| big, 16 ounce|
-|Xtra large| the biggest one, 24 ounce|
+|Small| the little one, 8 ounces|
+|Medium| regular, 12 ounces|
+|Large| big, 16 ounces|
+|Xtra large| the biggest one, 24 ounces|
-The model will return the normalized value for the entity when any of synonyms are seen in the input.
+The model returns the normalized value for the entity when any of synonyms are seen in the input.
## Test
The model will return the normalized value for the entity when any of synonyms a
## Timezone offset
-The endpoint includes [timezoneOffset](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity). This is the number in minutes you want to add or remove from the datetimeV2 prebuilt entity. For example, if the utterance is "what time is it now?", the datetimeV2 returned is the current time for the client request. If your client request is coming from a bot or other application that is not the same as your bot's user, you should pass in the offset between the bot and the user.
+The endpoint includes [timezoneOffset](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity). This is the number in minutes you want to add or remove from the datetimeV2 prebuilt entity. For example, if the utterance is "what time is it now?", the datetimeV2 returned is the current time for the client request. If your client request is coming from a bot or other application that isn't the same as your bot's user, you should pass in the offset between the bot and the user.
See [Change time zone of prebuilt datetimeV2 entity](luis-concept-data-alteration.md?#change-time-zone-of-prebuilt-datetimev2-entity).
For **English**, a token is a continuous span (no spaces or punctuation) of lett
|Phrase|Token count|Explanation|
|--|--|--|
|`Dog`|1|A single word with no punctuation or spaces.|
-|`RMT33W`|1|A record locator number. It may have numbers and letters, but does not have any punctuation.|
+|`RMT33W`|1|A record locator number. It might have numbers and letters, but doesn't have any punctuation.|
|`425-555-5555`|5|A phone number. Each punctuation mark is a single token so `425-555-5555` would be 5 tokens:<br>`425`<br>`-`<br>`555`<br>`-`<br>`5555` |
|`https://luis.ai`|7|`https`<br>`:`<br>`/`<br>`/`<br>`luis`<br>`.`<br>`ai`<br>|
Training data is the set of information that is needed to train a model. This in
### Training errors
-Training errors are predictions on your training data that do not match their labels.
+Training errors are predictions on your training data that don't match their labels.
## Utterance
-An [utterance](concepts/utterances.md) is user input that is short text representative of a sentence in a conversation. It is a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model and the model predicts on new utterance at runtime
+An [utterance](concepts/utterances.md) is user input that is short text representative of a sentence in a conversation. It's a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.
## Version
ai-services Luis How To Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-azure-subscription.md
An authoring resource lets you create, manage, train, test, and publish your app
* 1 million authoring transactions * 1,000 testing prediction endpoint requests per month.
-You can use the [v3.0-preview LUIS Programmatic APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f) to manage authoring resources.
+You can use the [v3.0-preview LUIS Programmatic APIs](/rest/api/cognitiveservices-luis/authoring/apps?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true) to manage authoring resources.
## Prediction resource
A prediction resource lets you query your prediction endpoint beyond the 1,000 r
* The free (F0) prediction resource, which gives you 10,000 prediction endpoint requests monthly. * Standard (S0) prediction resource, which is the paid tier.
-You can use the [v3.0-preview LUIS Endpoint API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0-preview/operations/5f68f4d40a511ce5a7440859) to manage prediction resources.
+You can use the [v3.0-preview LUIS Endpoint API](/rest/api/cognitiveservices-luis/runtime/operation-groups?view=rest-cognitiveservices-luis-runtime-v3.0&preserve-view=true) to manage prediction resources.
> [!Note] > * You can also use a [multi-service resource](../multi-service-resource.md?pivots=azcli) to get a single endpoint you can use for multiple Azure AI services.
For automated processes like CI/CD pipelines, you can automate the assignment of
az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv ```
-1. Use the token to request the LUIS runtime resources across subscriptions. Use the API to [get the LUIS Azure account](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be313cec181ae720aa2b26c) that your user account has access to.
+1. Use the token to request the LUIS runtime resources across subscriptions. Use the API to [get the LUIS Azure account](/rest/api/cognitiveservices-luis/authoring/azure-accounts/get-assigned?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) that your user account has access to.
This POST API requires the following values:
For automated processes like CI/CD pipelines, you can automate the assignment of
The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource that you want to assign to the LUIS app.
-1. Assign the token to the LUIS resource by using the [Assign a LUIS Azure accounts to an application](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be32228e8473de116325515) API.
+1. Assign the token to the LUIS resource by using the [Assign a LUIS Azure account to an application](/rest/api/cognitiveservices-luis/authoring/azure-accounts/assign-to-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API.
This POST API requires the following values:
When you unassign a resource, it's not deleted from Azure. It's only unlinked fr
az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv ```
-1. Use the token to request the LUIS runtime resources across subscriptions. Use the [Get LUIS Azure accounts API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be313cec181ae720aa2b26c), which your user account has access to.
+1. Use the token to request the LUIS runtime resources across subscriptions. Use the [Get LUIS Azure accounts API](/rest/api/cognitiveservices-luis/authoring/azure-accounts/get-assigned?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true), which your user account has access to.
This POST API requires the following values:
When you unassign a resource, it's not deleted from Azure. It's only unlinked fr
The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource that you want to assign to the LUIS app.
-1. Assign the token to the LUIS resource by using the [Unassign a LUIS Azure account from an application](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be32554f8591db3a86232e1/console) API.
+1. Unassign the LUIS resource by using the [Unassign a LUIS Azure account from an application](/rest/api/cognitiveservices-luis/authoring/azure-accounts/remove-from-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API.
This DELETE API requires the following values:
An app is defined by its Azure resources, which are determined by the owner's su
You can move your LUIS app. Use the following resources to help you do so by using the Azure portal or Azure CLI:
-* [Move an app between LUIS authoring resources](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/apps-move-app-to-another-luis-authoring-azure-resource)
* [Move a resource to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md) * [Move a resource within the same subscription or across subscriptions](../../azure-resource-manager/management/move-limitations/app-service-move-limitations.md)
ai-services Luis How To Collaborate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-collaborate.md
An app owner can add contributors to apps. These contributors can modify the mod
You have migrated if your LUIS authoring experience is tied to an Authoring resource on the **Manage -> Azure resources** page in the LUIS portal.
-In the Azure portal, find your Language Understanding (LUIS) authoring resource. It has the type `LUIS.Authoring`. In the resource's **Access Control (IAM)** page, add the role of **contributor** for the user that you want to contribute. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+In the Azure portal, find your Language Understanding (LUIS) authoring resource. It has the type `LUIS.Authoring`. In the resource's **Access Control (IAM)** page, add the role of **contributor** for the user that you want to contribute. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
## View the app as a contributor
ai-services Luis How To Manage Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-manage-versions.md
You can import a `.json` or a `.lu` version of your application.
See the following links to view the REST APIs for importing and exporting applications:
-* [Importing applications](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5892283039e2bb0d9c2805f5)
-* [Exporting applications](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c40)
+* [Importing applications](/rest/api/cognitiveservices-luis/authoring/versions/import?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
+* [Exporting applications](/rest/api/cognitiveservices-luis/authoring/versions/export?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
ai-services Luis Reference Application Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-application-settings.md
Last updated 01/19/2024
[!INCLUDE [deprecation notice](./includes/deprecation-notice.md)]
-These settings are stored in the [exported](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c40) app and updated with the REST APIs or LUIS portal.
+These settings are stored in the [exported](/rest/api/cognitiveservices-luis/authoring/versions/export?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) app and updated with the REST APIs or LUIS portal.
Changing your app version settings resets your app training status to untrained.
The following utterances show how diacritics normalization impacts utterances:
### Language support for diacritics
-#### Brazilian portuguese `pt-br` diacritics
+#### Brazilian Portuguese `pt-br` diacritics
|Diacritics set to false|Diacritics set to true| |-|-|
The following utterances show how diacritics normalization impacts utterances:
#### French `fr-` diacritics
-This includes both french and canadian subcultures.
+This includes both French and Canadian subcultures.
|Diacritics set to false|Diacritics set to true| |--|--|
This includes both french and canadian subcultures.
#### Spanish `es-` diacritics
-This includes both spanish and canadian mexican.
+This includes both Spanish and Mexican subcultures.
|Diacritics set to false|Diacritics set to true| |-|-|
ai-services Luis Reference Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-regions.md
[!INCLUDE [deprecation notice](./includes/deprecation-notice.md)]
-LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one predection key per region.
+LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one prediction key per region.
<a name="luis-website"></a>
Publishing regions are the regions where the application will be used in runtime
## Public apps
-A public app is published in all regions so that a user with a supported predection resource can access the app in all regions.
+A public app is published in all regions so that a user with a supported prediction resource can access the app in all regions.
<a name="publishing-regions"></a> ## Publishing regions are tied to authoring regions
-When you first create our LUIS application, you are required to choose an [authoring region](#luis-authoring-regions). To use the application in runtime, you are required to create a resource in a publishing region.
+When you first create your LUIS application, you're required to choose an [authoring region](#luis-authoring-regions). To use the application at runtime, you're required to create a resource in a publishing region.
Every authoring region has corresponding prediction regions that you can publish your application to, which are listed in the tables below. If your app is currently in the wrong authoring region, export the app, and import it into the correct authoring region to match the required publishing region. ## Single data residency
-Single data residency means that the data does not leave the boundaries of the region.
+Single data residency means that the data doesn't leave the boundaries of the region.
> [!Note]
-> * Make sure to set `log=false` for [V3 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a91e54c9db63d589f433) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the runtime region.
+> * Make sure to set `log=false` for [V3 APIs](/rest/api/cognitiveservices-luis/runtime/prediction/get-slot-prediction?view=rest-cognitiveservices-luis-runtime-v3.0&tabs=HTTP&preserve-view=true) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the runtime region.
> * If `log=true`, data is returned to the authoring region for active learning. ## Publishing to Europe
ai-services Luis Reference Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-response-codes.md
Title: API HTTP response codes - LUIS
-description: Understand what HTTP response codes are returned from the LUIS Authoring and Endpoint APIs
+description: Understand what HTTP response codes are returned from the LUIS Authoring and Endpoint APIs.
#
The following table lists some of the most common HTTP response status codes for
|401|Authoring|used endpoint key, instead of authoring key|
|401|Authoring, Endpoint|invalid, malformed, or empty key|
|401|Authoring, Endpoint| key doesn't match region|
-|401|Authoring|you are not the owner or collaborator|
+|401|Authoring|you aren't the owner or collaborator|
|401|Authoring|invalid order of API calls|
|403|Authoring, Endpoint|total monthly key quota limit exceeded|
|409|Endpoint|application is still loading|
The following table lists some of the most common HTTP response status codes for
## Next steps
-* REST API [authoring](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f) and [endpoint](https://westus.dev.cognitive.microsoft.com/docs/services/5819c76f40a6350ce09de1ac/operations/5819c77140a63516d81aee78) documentation
+* REST API [authoring](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true) and [endpoint](/rest/api/cognitiveservices-luis/runtime/operation-groups?view=rest-cognitiveservices-luis-runtime-v3.0&preserve-view=true) documentation
ai-services Luis Tutorial Node Import Utterances Csv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-tutorial-node-import-utterances-csv.md
Title: Import utterances using Node.js - LUIS
-description: Learn how to build a LUIS app programmatically from preexisting data in CSV format using the LUIS Authoring API.
+description: Learn how to build a LUIS app programmatically from pre-existing data in CSV format using the LUIS Authoring API.
#
LUIS provides a programmatic API that does everything that the [LUIS](luis-refer
* Sign in to the [LUIS](luis-reference-regions.md) website and find your [authoring key](luis-how-to-azure-subscription.md) in Account Settings. You use this key to call the Authoring APIs. * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. * This article starts with a CSV for a hypothetical company's log files of user requests. Download it [here](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/IoT.csv).
-* Install the latest Node.js with NPM. Download it from [here](https://nodejs.org/en/download/).
+* Install the latest Node.js version. Download it from [here](https://nodejs.org/en/download/).
* **[Recommended]** Visual Studio Code for IntelliSense and debugging, download it from [here](https://code.visualstudio.com/) for free. All of the code in this article is available on the [Azure-Samples Language Understanding GitHub repository](https://github.com/Azure-Samples/cognitive-services-language-understanding/tree/master/examples/build-app-programmatically-csv).
-## Map preexisting data to intents and entities
+## Map pre-existing data to intents and entities
Even if you have a system that wasn't created with LUIS in mind, if it contains textual data that maps to different things users want to do, you might be able to come up with a mapping from the existing categories of user input to intents in LUIS. If you can identify important words or phrases in what the users said, these words might map to entities. Open the [`IoT.csv`](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/IoT.csv) file. It contains a log of user queries to a hypothetical home automation service, including how they were categorized, what the user said, and some columns with useful information pulled out of them.
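To make that mapping concrete, here's a small Python sketch (the article's own sample is Node.js) that groups logged queries by their existing category so each category can become a candidate intent; the column names are hypothetical, not the real IoT.csv headers:

```python
import csv
from collections import defaultdict

# Hypothetical column names ("Category", "Query"); the real IoT.csv columns are
# described in the article and its Node.js sample.
utterances_by_intent = defaultdict(list)

with open("IoT.csv", newline="", encoding="utf-8") as csv_file:
    for row in csv.DictReader(csv_file):
        category = row["Category"]   # existing category -> candidate LUIS intent
        utterance = row["Query"]     # what the user said -> example utterance
        utterances_by_intent[category].append(utterance)

for intent, examples in utterances_by_intent.items():
    print(f"{intent}: {len(examples)} example utterances")
```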
The following code adds the entities to the LUIS app. Copy or [download](https:/
## Add utterances
-Once the entities and intents have been defined in the LUIS app, you can add the utterances. The following code uses the [Utterances_AddBatch](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09) API, which allows you to add up to 100 utterances at a time. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_upload.js) it, and save it into `_upload.js`.
+Once the entities and intents have been defined in the LUIS app, you can add the utterances. The following code uses the [Utterances_AddBatch](/rest/api/cognitiveservices-luis/authoring/examples/batch?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API, which allows you to add up to 100 utterances at a time. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_upload.js) it, and save it into `_upload.js`.
[!code-javascript[Node.js code for adding utterances](~/samples-luis/examples/build-app-programmatically-csv/_upload.js)]
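As a hedged illustration of the batch payload shape in Python (the article's own code is the Node.js sample above; the route and field names here are assumptions to verify against the linked batch API reference):

```python
import requests

# Placeholders; confirm the batch route and label field names against the linked reference.
AUTHORING_ENDPOINT = "https://<your-authoring-resource>.cognitiveservices.azure.com"
APP_ID = "<app-id>"
VERSION_ID = "0.1"
HEADERS = {"Ocp-Apim-Subscription-Key": "<authoring-key>"}

# Up to 100 labeled utterances per request; entity labels use character offsets.
batch = [
    {
        "text": "turn on the kitchen lights",
        "intentName": "TurnOn",
        "entityLabels": [
            {"entityName": "Location", "startCharIndex": 12, "endCharIndex": 18}
        ],
    },
    {"text": "switch off the fan", "intentName": "TurnOff", "entityLabels": []},
]

url = f"{AUTHORING_ENDPOINT}/luis/authoring/v3.0-preview/apps/{APP_ID}/versions/{VERSION_ID}/examples"
response = requests.post(url, headers=HEADERS, json=batch)
print(response.json())
```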
Once the entities and intents have been defined in the LUIS app, you can add the
### Install Node.js dependencies
-Install the Node.js dependencies from NPM in the terminal/command line.
+Install the Node.js dependencies in the terminal/command line.
```console > npm install
Run the script from a terminal/command line with Node.js.
> node index.js ```
-or
+Or
```console > npm start
Once the script completes, you can sign in to [LUIS](luis-reference-regions.md)
## Next steps
-[Test and train your app in LUIS website](how-to/train-test.md)
+[Test and train your app in LUIS website](how-to/train-test.md).
## Additional resources This sample application uses the following LUIS APIs:
-- [create app](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c36)
-- [add intents](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0c)
-- [add entities](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0e)
-- [add utterances](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09)
+- [create app](/rest/api/cognitiveservices-luis/authoring/apps/add?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
+- [add intents](/rest/api/cognitiveservices-luis/authoring/features/add-intent-feature?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
+- [add entities](/rest/api/cognitiveservices-luis/authoring/features/add-entity-feature?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
+- [add utterances](/rest/api/cognitiveservices-luis/authoring/examples/add?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
ai-services Luis User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-user-privacy.md
Last updated 01/19/2024
Delete customer data to ensure privacy and compliance.

## Summary of customer data request features
-Language Understanding Intelligent Service (LUIS) preserves customer content to operate the service, but the LUIS user has full control over viewing, exporting, and deleting their data. This can be done through the LUIS web [portal](luis-reference-regions.md) or the [LUIS Authoring (also known as Programmatic) APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f).
+Language Understanding Intelligent Service (LUIS) preserves customer content to operate the service, but the LUIS user has full control over viewing, exporting, and deleting their data. This can be done through the LUIS web [portal](luis-reference-regions.md) or the [LUIS Authoring (also known as Programmatic) APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true).
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-intro-sentence.md)]
LUIS users have full control to delete any user content, either through the LUIS
| | **User Account** | **Application** | **Example Utterance(s)** | **End-user queries** |
|---|---|---|---|---|
| **Portal** | [Link](luis-concept-data-storage.md#delete-an-account) | [Link](how-to/sign-in.md) | [Link](luis-concept-data-storage.md#utterances-in-an-intent) | [Active learning utterances](how-to/improve-application.md)<br>[Logged Utterances](luis-concept-data-storage.md#disable-logging-utterances) |
-| **APIs** | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c4c) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c39) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0b) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/58b6f32139e2bb139ce823c9) |
+| **APIs** | [Link](/rest/api/cognitiveservices-luis/authoring/azure-accounts/remove-from-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/apps/delete?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/examples/delete?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/versions/delete-unlabelled-utterance?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) |
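For example, the **Application** entry in the API row maps to a single DELETE call. The following hedged Node.js sketch illustrates it; the v2.0 authoring route shown here is an assumption to verify against the linked reference, and the endpoint and app ID are placeholders.

```javascript
// Minimal sketch: delete a LUIS application (and all of its data) via the authoring API.
// The v2.0 route is an assumption drawn from the authoring reference; verify before use.
const endpoint = "https://<your-region>.api.cognitive.microsoft.com"; // placeholder
const appId = "<your-app-id>";

async function deleteApp() {
  const res = await fetch(`${endpoint}/luis/api/v2.0/apps/${appId}`, {
    method: "DELETE",
    headers: { "Ocp-Apim-Subscription-Key": process.env.LUIS_AUTHORING_KEY }
  });
  console.log("Delete status:", res.status); // 200 indicates the app was removed
}

deleteApp().catch(console.error);
```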
## Exporting customer data
LUIS users have full control to view the data on the portal, however it must be
| | **User Account** | **Application** | **Utterance(s)** | **End-user queries** |
|---|---|---|---|---|
-| **APIs** | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c48) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c40) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c0a) | [Link](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c36) |
+| **APIs** | [Link](/rest/api/cognitiveservices-luis/authoring/azure-accounts/list-user-luis-accounts?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/versions/export?view=rest-cognitiveservices-luis-authoring-v2.0&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/examples/list?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/apps/download-query-logs?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) |
## Location of active learning
ai-services Reference Pattern Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-pattern-syntax.md
The words of the book title are not confusing to LUIS because LUIS knows where t
## Explicit lists
-create an [Explicit List](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5ade550bd5b81c209ce2e5a8) through the authoring API to allow the exception when:
+Create an [Explicit List](/rest/api/cognitiveservices-luis/authoring/model/add-explicit-list-item?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) through the authoring API to allow the exception when:
* Your pattern contains a [Pattern.any](concepts/entities.md#patternany-entity)
* And that pattern syntax allows for the possibility of an incorrect entity extraction based on the utterance.
In the following utterances, the **subject** and **person** entity are extracted
In the preceding table, the subject should be `the man from La Mancha` (a book title) but because the subject includes the optional word `from`, the title is incorrectly predicted.
-To fix this exception to the pattern, add `the man from la mancha` as an explicit list match for the {subject} entity using the [authoring API for explicit list](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5ade550bd5b81c209ce2e5a8).
+To fix this exception to the pattern, add `the man from la mancha` as an explicit list match for the {subject} entity using the [authoring API for explicit list](/rest/api/cognitiveservices-luis/authoring/model/add-explicit-list-item?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true).
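As a hedged illustration, the sketch below adds that phrase to the Pattern.any entity's explicit list. The v2.0 route and body shape are taken from the authoring reference and should be treated as assumptions; the region, IDs, and key are placeholders.

```javascript
// Minimal sketch: add an explicit list item to a Pattern.any entity so that
// "the man from la mancha" is always extracted as the {subject} entity.
// Route and body shape follow the v2.0 authoring reference; treat them as assumptions.
const endpoint = "https://<your-region>.api.cognitive.microsoft.com"; // placeholder
const appId = "<your-app-id>";
const versionId = "0.1";
const entityId = "<pattern-any-entity-id>"; // placeholder for the {subject} entity

async function addExplicitListItem() {
  const res = await fetch(
    `${endpoint}/luis/api/v2.0/apps/${appId}/versions/${versionId}/patternanyentities/${entityId}/explicitlist`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": process.env.LUIS_AUTHORING_KEY,
        "Content-Type": "application/json"
      },
      body: JSON.stringify({ explicitListItem: "the man from la mancha" })
    }
  );
  console.log(res.status, await res.text());
}

addExplicitListItem().catch(console.error);
```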
## Syntax to mark optional text in a template utterance
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/role-based-access-control.md
Azure RBAC can be assigned to a Language Understanding Authoring resource. To gr
1. On the **Members** tab, select a user, group, service principal, or managed identity. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
## LUIS role types
A user that should only be validating and reviewing LUIS applications, typically
:::column-end::: :::column span=""::: All GET APIs under:
- * [LUIS Programmatic v3.0-preview](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f)
- * [LUIS Programmatic v2.0 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f)
+ * [LUIS Programmatic v3.0-preview](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true)
+ * [LUIS Programmatic v2.0 APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v2.0&preserve-view=true)
All the APIs under: * LUIS Endpoint APIs v2.0
- * [LUIS Endpoint APIs v3.0](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8)
- * [LUIS Endpoint APIs v3.0-preview](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0-preview/operations/5cb0a9459a1fe8fa44c28dd8)
-
+ * [LUIS Endpoint APIs v3.0](/rest/api/cognitiveservices-luis/runtime/operation-groups?view=rest-cognitiveservices-luis-runtime-v3.0&preserve-view=true)
All the Batch Testing Web APIs :::column-end::: :::row-end:::
A user that is responsible for building and modifying LUIS application, as a col
All POST, PUT and DELETE APIs under:
- * [LUIS Programmatic v3.0-preview](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c2f)
- * [LUIS Programmatic v2.0 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2d)
+ * [LUIS Programmatic v3.0-preview](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true)
+ * [LUIS Programmatic v2.0 APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v2.0&preserve-view=true)
Except for
- * [Delete application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c39)
- * [Move app to another LUIS authoring Azure resource](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/apps-move-app-to-another-luis-authoring-azure-resource)
- * [Publish an application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5890b47c39e2bb052c5b9c3b)
- * [Update application settings](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/58aeface39e2bb03dcd5909e)
- * [Assign a LUIS azure accounts to an application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5be32228e8473de116325515)
- * [Remove an assigned LUIS azure accounts from an application](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5be32554f8591db3a86232e1)
+ * [Delete application](/rest/api/cognitiveservices-luis/authoring/apps/delete?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
+ * Move app to another LUIS authoring Azure resource
+ * [Publish an application](/rest/api/cognitiveservices-luis/authoring/apps/publish?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
+ * [Update application settings](/rest/api/cognitiveservices-luis/authoring/apps/update-settings?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
+ * [Assign LUIS Azure accounts to an application](/rest/api/cognitiveservices-luis/authoring/azure-accounts/assign-to-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
+ * [Remove assigned LUIS Azure accounts from an application](/rest/api/cognitiveservices-luis/authoring/azure-accounts/remove-from-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)
:::column-end::: :::row-end:::
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
Previously updated : 03/25/2024 Last updated : 04/05/2024
Virtual networks are supported in [regions where Azure AI services are available
> - `CognitiveServicesManagement` > - `CognitiveServicesFrontEnd` > - `Storage` (Speech Studio only)
+>
+> For information on configuring Azure AI Studio, see the [Azure AI Studio documentation](../ai-studio/how-to/configure-private-link.md).
## Change the default network access rule
Currently, only IPv4 addresses are supported. Each Azure AI services resource su
To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, identify the internet-facing IP addresses used by your network. Contact your network administrator for help.
-If you use Azure ExpressRoute on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For more information, see [What is Azure ExpressRoute](../expressroute/expressroute-introduction.md).
+If you use Azure ExpressRoute on-premises for Microsoft peering, you need to identify the NAT IP addresses. For more information, see [What is Azure ExpressRoute](../expressroute/expressroute-introduction.md).
-For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or supplied by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting.
-
-To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) use the Azure portal. For more information, see [NAT requirements for Azure public peering](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering).
+For Microsoft peering, the NAT IP addresses that are used are either customer-provided or supplied by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting.
### Managing IP network rules
ai-services Liveness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/liveness.md
The liveness detection solution successfully defends against various spoof types
- Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**. - You need the key and endpoint from the resource you create to connect your application to the Face service. You'll paste your key and endpoint into the code later in the quickstart. - You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.-- Access to the Azure AI Vision Face Client SDK for mobile (IOS and Android). To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page.
+- Access to the Azure AI Vision Face Client SDK for mobile (iOS and Android) and web. To get started, you need to apply for the [Face Recognition Limited Access features](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to get access to the SDK. For more information, see the [Face Limited Access](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext) page.
## Perform liveness detection
-The liveness solution integration involves two different components: a mobile application and an app server/orchestrator.
+The liveness solution integration involves two different components: a frontend mobile/web application and an app server/orchestrator.
### Integrate liveness into mobile application
-Once you have access to the SDK, follow instruction in the [azure-ai-vision-sdk](https://github.com/Azure-Samples/azure-ai-vision-sdk) GitHub repository to integrate the UI and the code into your native mobile application. The liveness SDK supports both Java/Kotlin for Android and Swift for iOS mobile applications:
+Once you have access to the SDK, follow the instructions in the [azure-ai-vision-sdk](https://github.com/Azure-Samples/azure-ai-vision-sdk) GitHub repository to integrate the UI and the code into your native mobile application. The liveness SDK supports Java/Kotlin for Android mobile applications, Swift for iOS mobile applications, and JavaScript for web applications:
- For Swift iOS, follow the instructions in the [iOS sample](https://aka.ms/azure-ai-vision-face-liveness-client-sdk-ios-readme) - For Kotlin/Java Android, follow the instructions in the [Android sample](https://aka.ms/liveness-sample-java)
+- For JavaScript Web, follow the instructions in the [Web sample](https://aka.ms/liveness-sample-web)
Once you've added the code into your application, the SDK handles starting the camera, guiding the end-user to adjust their position, composing the liveness payload, and calling the Azure AI Face cloud service to process the liveness payload.
The high-level steps involved in liveness orchestration are illustrated below:
:::image type="content" source="../media/liveness/liveness-diagram.jpg" alt-text="Diagram of the liveness workflow in Azure AI Face." lightbox="../media/liveness/liveness-diagram.jpg":::
-1. The mobile application starts the liveness check and notifies the app server.
+1. The frontend application starts the liveness check and notifies the app server.
-1. The app server creates a new liveness session with Azure AI Face Service. The service creates a liveness-session and responds back with a session-authorization-token.
+1. The app server creates a new liveness session with the Azure AI Face service. The service creates a liveness session and responds with a session-authorization-token. For details about each request parameter used to create a liveness session, see the [Liveness Create Session Operation](https://aka.ms/face-api-reference-createlivenesssession) reference. A minimal app-server sketch follows these steps.
```json Request:
The high-level steps involved in liveness orchestration are illustrated below:
} ```
-1. The app server provides the session-authorization-token back to the mobile application.
+1. The app server provides the session-authorization-token back to the frontend application.
-1. The mobile application provides the session-authorization-token during the Azure AI Vision SDKΓÇÖs initialization.
+1. The frontend application provides the session-authorization-token during the Azure AI Vision SDK's initialization.
```kotlin mServiceOptions?.setTokenCredential(com.azure.android.core.credential.TokenCredential { _, callback ->
The high-level steps involved in liveness orchestration are illustrated below:
serviceOptions?.authorizationToken = "<INSERT_TOKEN_HERE>" ```
+ ```javascript
+ azureAIVisionFaceAnalyzer.token = "<INSERT_TOKEN_HERE>"
+ ```
+ 1. The SDK then starts the camera, guides the user to position correctly and then prepares the payload to call the liveness detection service endpoint. 1. The SDK calls the Azure AI Vision Face service to perform the liveness detection. Once the service responds, the SDK notifies the mobile application that the liveness check has been completed.
-1. The mobile application relays the liveness check completion to the app server.
+1. The frontend application relays the liveness check completion to the app server.
1. The app server can now query for the liveness detection result from the Azure AI Vision Face service.
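To make the app-server side of these steps concrete, here is a hedged Node.js sketch of steps 2 and 8: creating a liveness session and later querying its result. The route, API version, and response field names shown are assumptions taken from the linked session reference; the resource endpoint, key, and parameter values are placeholders.

```javascript
// Hedged sketch of the app server's role: create a liveness session (step 2)
// and query the result after the client finishes (step 8).
// The route and api-version below are assumptions; confirm them in the Face session reference.
const endpoint = "https://<your-face-resource>.cognitiveservices.azure.com";
const key = process.env.FACE_APIKEY;
const base = `${endpoint}/face/v1.1-preview.1/detectLiveness/singleModal/sessions`; // assumed route

async function createSession() {
  const res = await fetch(base, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json" },
    body: JSON.stringify({
      livenessOperationMode: "Passive",   // parameter names follow the linked reference
      deviceCorrelationId: "my-device-id-123" // placeholder correlation ID
    })
  });
  const { sessionId, authToken } = await res.json(); // field names per the reference; verify
  return { sessionId, authToken };                   // hand authToken to the frontend SDK
}

async function getSessionResult(sessionId) {
  const res = await fetch(`${base}/${sessionId}`, {
    headers: { "Ocp-Apim-Subscription-Key": key }
  });
  return res.json(); // contains the liveness decision for the session
}
```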
The high-level steps involved in liveness orchestration are illustrated below:
"width": 409, "height": 395 },
- "fileName": "video.webp",
+ "fileName": "content.bin",
"timeOffsetWithinFile": 0, "imageType": "Color" },
Use the following tips to ensure that your input images give the most accurate r
The high-level steps involved in liveness with verification orchestration are illustrated below: 1. Provide the verification reference image by either of the following two methods:
- - The app server provides the reference image when creating the liveness session.
+ - The app server provides the reference image when creating the liveness session. For details about each request parameter used to create a liveness session with verification, see the [Liveness With Verify Create Session Operation](https://aka.ms/face-api-reference-createlivenesswithverifysession) reference.
```json Request:
The high-level steps involved in liveness with verification orchestration are il
```
- - The mobile application provides the reference image when initializing the SDK.
+ - The mobile application provides the reference image when initializing the SDK. This is not a supported scenario in the web solution.
```kotlin val singleFaceImageSource = VisionSource.fromFile("/path/to/image.jpg")
The high-level steps involved in liveness with verification orchestration are il
--header 'Content-Type: multipart/form-data' \
--header 'apim-recognition-model-preview-1904: true' \
--header 'Authorization: Bearer <session-authorization-token>' \
- --form 'Content=@"video.webp"' \
+ --form 'Content=@"content.bin"' \
--form 'Metadata="<insert-metadata>" Response:
The high-level steps involved in liveness with verification orchestration are il
"width": 409, "height": 395 },
- "fileName": "video.webp",
+ "fileName": "content.bin",
"timeOffsetWithinFile": 0, "imageType": "Color" },
See the Azure AI Vision SDK reference to learn about other options in the livene
- [Kotlin (Android)](https://aka.ms/liveness-sample-java) - [Swift (iOS)](https://aka.ms/azure-ai-vision-face-liveness-client-sdk-ios-readme)
+- [JavaScript (Web)](https://aka.ms/azure-ai-vision-face-liveness-client-sdk-web-readme)
See the Session REST API reference to learn more about the features available to orchestrate the liveness solution.

-- [Liveness Session APIs](https://westus.dev.cognitive.microsoft.com/docs/services/609a5e53f0971cb3/operations/session-create-detectliveness-singlemodal)
-- [Liveness-With-Verify Session APIs](https://westus.dev.cognitive.microsoft.com/docs/services/609a5e53f0971cb3/operations/session-create-detectlivenesswithverify-singlemodal)
+- [Liveness Session Operations](/rest/api/face/liveness-session-operations)
ai-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/computer-vision-how-to-install-containers.md
In this article, you learned concepts and workflow for downloading, installing,
* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings * Review the [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text
-* Refer to the [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) for details about the methods supported by the container.
+* Refer to the [Read API](/rest/api/computervision/operation-groups?view=rest-computervision-v3.2-preview) for details about the methods supported by the container.
* Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality. * Use more [Azure AI containers](../cognitive-services-container-support.md)
ai-services Concept Background Removal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-background-removal.md
It's important to note the limitations of background removal:
## Use the API
-The background removal feature is available through the [Segment](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-02-01-preview/operations/63e6b6d9217d201194bbecbd) API (`imageanalysis:segment`). See the [Background removal how-to guide](./how-to/background-removal.md) for more information.
+The background removal feature is available through the [Segment](/rest/api/computervision/image-analysis/segment?view=rest-computervision-2023-02-01-preview&tabs=HTTP) API (`imageanalysis:segment`). See the [Background removal how-to guide](./how-to/background-removal.md) for more information.
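As a hedged sketch of the call shape, the following Node.js snippet posts an image URL to the segment endpoint in `backgroundRemoval` mode and saves the returned PNG. The api-version matches the preview reference linked above; the resource endpoint, key, and image URL are placeholders.

```javascript
// Minimal sketch: call the Segment API in backgroundRemoval mode and save the returned PNG.
// Endpoint shape follows the 2023-02-01-preview reference linked above; verify before use.
import { writeFile } from "node:fs/promises";

const endpoint = "https://<your-resource>.cognitiveservices.azure.com";
const key = process.env.VISION_KEY;

async function removeBackground(imageUrl) {
  const res = await fetch(
    `${endpoint}/computervision/imageanalysis:segment?api-version=2023-02-01-preview&mode=backgroundRemoval`,
    {
      method: "POST",
      headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json" },
      body: JSON.stringify({ url: imageUrl })
    }
  );
  // The service returns the foreground image with a transparent background as binary PNG data.
  await writeFile("foreground.png", Buffer.from(await res.arrayBuffer()));
}

removeBackground("https://example.com/sample.jpg").catch(console.error); // placeholder URL
```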
## Next steps
ai-services Concept Brand Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-brand-detection.md
Brand detection is a specialized mode of [object detection](concept-object-detec
The Azure AI Vision service detects whether there are brand logos in a given image; if there are, it returns the brand name, a confidence score, and the coordinates of a bounding box around the logo.
-The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the brand you're looking for is not detected by the Azure AI Vision service, you could also try creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.
+The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the brand you're looking for isn't detected by the Azure AI Vision service, you could also try creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.
## Brand detection example
The following JSON responses illustrate what Azure AI Vision returns when detect
] ```
-In some cases, the brand detector will pick up both the logo image and the stylized brand name as two separate logos.
+In some cases, the brand detector picks up both the logo image and the stylized brand name as two separate logos.
![A gray sweatshirt with a Microsoft label and logo on it](./Images/gray-shirt-logo.jpg)
In some cases, the brand detector will pick up both the logo image and the styli
## Use the API
-The brand detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Brands` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"brands"` section.
+The brand detection feature is part of the [Analyze Image](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API. You can call this API through a native SDK or through REST calls. Include `Brands` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"brands"` section.
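As a concrete illustration of that request shape, here is a minimal Node.js sketch; the resource endpoint, key variable, and image URL are placeholders.

```javascript
// Minimal sketch: request brand detection from Analyze Image 3.2 and print the "brands" section.
const endpoint = "https://<your-resource>.cognitiveservices.azure.com";
const key = process.env.VISION_KEY;

async function detectBrands(imageUrl) {
  const res = await fetch(`${endpoint}/vision/v3.2/analyze?visualFeatures=Brands`, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json" },
    body: JSON.stringify({ url: imageUrl })
  });
  const result = await res.json();
  // Each entry has a name, confidence score, and bounding rectangle in pixels.
  console.log(JSON.stringify(result.brands, null, 2));
}

detectBrands("https://example.com/gray-shirt-logo.jpg").catch(console.error); // placeholder URL
```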
* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Categorizing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-categorizing-images.md
The following table illustrates a typical image set and the category returned by
## Use the API
-The categorization feature is part of the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Categories` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"categories"` section.
+The categorization feature is part of the [Analyze Image 3.2](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API. You can call this API through a native SDK or through REST calls. Include `Categories` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"categories"` section.
* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-describing-images.md
Previously updated : 07/04/2023 Last updated : 04/30/2024
The following JSON response illustrates what the Analyze API returns when descri
## Use the API
-The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section.
+The image description feature is part of the [Analyze Image](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section.
* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Detecting Adult Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-adult-content.md
The "adult" classification contains several different categories:
## Use the API
-You can detect adult content with the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties&mdash;`isAdultContent`, `isRacyContent`, and `isGoryContent`&mdash;in its JSON response. The method also returns corresponding properties&mdash;`adultScore`, `racyScore`, and `goreScore`&mdash;which represent confidence scores between zero and one for each respective category.
+You can detect adult content with the [Analyze Image 3.2](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties&mdash;`isAdultContent`, `isRacyContent`, and `isGoryContent`&mdash;in its JSON response. The method also returns corresponding properties&mdash;`adultScore`, `racyScore`, and `goreScore`&mdash;which represent confidence scores between zero and one for each respective category.
- [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Detecting Color Schemes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-color-schemes.md
The following table shows Azure AI Vision's black and white evaluation in the sa
## Use the API
-The color scheme detection feature is part of the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Color` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"color"` section.
+The color scheme detection feature is part of the [Analyze Image 3.2](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API. You can call this API through a native SDK or through REST calls. Include `Color` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"color"` section.
* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Detecting Domain Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-domain-content.md
# Domain-specific content detection
-In addition to tagging and high-level categorization, Azure AI Vision also supports further domain-specific analysis using models that have been trained on specialized data.
+In addition to tagging and high-level categorization, Azure AI Vision also supports further domain-specific analysis using models that are trained on specialized data.
There are two ways to use the domain-specific models: by themselves (scoped analysis) or as an enhancement to the image [categorization](./concept-categorizing-images.md) feature. ### Scoped analysis
-You can analyze an image using only the chosen domain-specific model by calling the [Models/\<model\>/Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API.
+You can analyze an image using only the chosen domain-specific model by calling the [Models/\<model\>/Analyze](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API.
-The following is a sample JSON response returned by the **models/celebrities/analyze** API for the given image:
+The following is a sample JSON response returned by the `models/celebrities/analyze` API for the given image:
![Satya Nadella standing, smiling](./images/satya.jpeg)
The following is a sample JSON response returned by the **models/celebrities/ana
### Enhanced categorization analysis
-You can also use domain-specific models to supplement general image analysis. You do this as part of [high-level categorization](concept-categorizing-images.md) by specifying domain-specific models in the *details* parameter of the [Analyze](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API call.
+You can also use domain-specific models to supplement general image analysis. You do this as part of [high-level categorization](concept-categorizing-images.md) by specifying domain-specific models in the *details* parameter of the [Analyze Image](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API call.
In this case, the 86-category taxonomy classifier is called first. If any of the detected categories have a matching domain-specific model, the image is passed through that model as well and the results are added.
Currently, Azure AI Vision supports the following domain-specific models:
| celebrities | Celebrity recognition, supported for images classified in the `people_` category | | landmarks | Landmark recognition, supported for images classified in the `outdoor_` or `building_` categories |
-Calling the [Models](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20e) API will return this information along with the categories to which each model can apply:
+Calling the [Models](/rest/api/computervision/list-models/list-models?view=rest-computervision-v3.2&tabs=HTTP) API returns this information along with the categories to which each model can apply:
```json {
Calling the [Models](https://westcentralus.dev.cognitive.microsoft.com/docs/serv
## Use the API
-This feature is available through the [Analyze Image 3.2 API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). You can call this API through a native SDK or through REST calls. Include `Celebrities` or `Landmarks` in the **details** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"details"` section.
+This feature is available through the [Analyze Image 3.2 API](/rest/api/computervision/analyze-image/analyze-image?view=rest-computervision-v3.2&tabs=HTTP). You can call this API through a native SDK or through REST calls. Include `Celebrities` or `Landmarks` in the **details** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"details"` section.
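For example, here is a hedged Node.js sketch that requests the landmarks domain model through the **details** parameter and reads the `detail` section of each returned category; the resource endpoint, key, and image URL are placeholders.

```javascript
// Minimal sketch: run Analyze Image 3.2 with the landmarks domain model
// and print any landmark matches found under each category's "detail" section.
const endpoint = "https://<your-resource>.cognitiveservices.azure.com";
const key = process.env.VISION_KEY;

async function detectLandmarks(imageUrl) {
  const res = await fetch(`${endpoint}/vision/v3.2/analyze?details=Landmarks`, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json" },
    body: JSON.stringify({ url: imageUrl })
  });
  const { categories = [] } = await res.json();
  for (const category of categories) {
    for (const landmark of category.detail?.landmarks ?? []) {
      console.log(`${landmark.name} (confidence ${landmark.confidence})`);
    }
  }
}

detectLandmarks("https://example.com/eiffel-tower.jpg").catch(console.error); // placeholder URL
```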
* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-faces.md
The next example demonstrates the JSON response returned for an image containing
## Use the API
-The face detection feature is part of the [Analyze Image 3.2](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.
+The face detection feature is part of the [Analyze Image 3.2](/rest/api/computervision/analyze-image/analyze-image?view=rest-computervision-v3.2&tabs=HTTP) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"faces"` section.
* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Detecting Image Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-detecting-image-types.md
# Image type detection
-With the [Analyze Image 3.2 API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b), Azure AI Vision can analyze the content type of images and indicate whether an image is clip art or a line drawing.
+With the [Analyze Image 3.2 API](/rest/api/computervision/analyze-image/analyze-image?view=rest-computervision-v3.2&tabs=HTTP), Azure AI Vision can analyze the content type of images and indicate whether an image is clip art or a line drawing.
## Clip art detection
The following JSON responses illustrates what Azure AI Vision returns when indic
## Use the API
-The image type detection feature is part of the [Analyze Image 3.2 API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). You can call this API through a native SDK or through REST calls. Include `ImageType` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"imageType"` section.
+The image type detection feature is part of the [Analyze Image 3.2 API](/rest/api/computervision/analyze-image/analyze-image?view=rest-computervision-v3.2&tabs=HTTP). You can call this API through a native SDK or through REST calls. Include `ImageType` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"imageType"` section.
* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-detection.md
- ignite-2023 Previously updated : 07/04/2023 Last updated : 04/30/2024
This article explains the concepts of face detection and face attribute data. Face detection is the process of locating human faces in an image and optionally returning different kinds of face-related data.
-You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](./quickstarts-sdk/identity-client-library.md). Or, for a more in-depth guide, see [Call the detect API](./how-to/identity-detect-faces.md).
+You use the [Detect] API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](./quickstarts-sdk/identity-client-library.md). Or, for a more in-depth guide, see [Call the detect API](./how-to/identity-detect-faces.md).
## Face rectangle
Try out the capabilities of face detection quickly and easily using Vision Studi
## Face ID
-The face ID is a unique identifier string for each detected face in an image. Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+The face ID is a unique identifier string for each detected face in an image. Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Detect] API call.
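As an illustration, the hedged sketch below calls the Detect operation and requests a face ID and landmarks. The resource endpoint, key, and image URL are placeholders, and face IDs are only returned to resources approved for the Limited Access features.

```javascript
// Minimal sketch: call Detect to get a face rectangle, face ID, and landmarks for each face.
// Face IDs are only returned to resources approved for the Limited Access features.
const endpoint = "https://<your-face-resource>.cognitiveservices.azure.com";
const key = process.env.FACE_APIKEY;

async function detectFaces(imageUrl) {
  const query = new URLSearchParams({
    returnFaceId: "true",
    returnFaceLandmarks: "true",
    detectionModel: "detection_03",
    recognitionModel: "recognition_04"
  });
  const res = await fetch(`${endpoint}/face/v1.0/detect?${query}`, {
    method: "POST",
    headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json" },
    body: JSON.stringify({ url: imageUrl })
  });
  const faces = await res.json();
  for (const face of faces) {
    console.log(face.faceId, face.faceRectangle); // one entry per detected face
  }
}

detectFaces("https://example.com/family-photo.jpg").catch(console.error); // placeholder URL
```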
## Face landmarks
The Detection_03 model currently has the most accurate landmark detection. The e
[!INCLUDE [Sensitive attributes notice](./includes/identity-sensitive-attributes.md)]
-Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected:
+Attributes are a set of features that can optionally be detected by the [Detect] API. The following attributes can be detected:
* **Accessories**. Indicates whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
* **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
If you're detecting faces from a video feed, you may be able to improve performa
Now that you're familiar with face detection concepts, learn how to write a script that detects faces in a given image. * [Call the detect API](./how-to/identity-detect-faces.md)+
+[Detect]: /rest/api/face/face-detection-operations/detect
ai-services Concept Face Recognition Data Structures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-recognition-data-structures.md
This article explains the data structures used in the Face service for face recognition operations. These data structures hold data on faces and persons.
-You can try out the capabilities of face recognition quickly and easily using Vision Studio.
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
- [!INCLUDE [Gate notice](./includes/identity-gate-notice.md)] ## Data structures used with Identify
ai-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-recognition.md
This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the process of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
-You can try out the capabilities of face recognition quickly and easily using Vision Studio.
-> [!div class="nextstepaction"]
-> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
- ## Face recognition operations
You can try out the capabilities of face recognition quickly and easily using Vi
### PersonGroup creation and training
-You need to create a [PersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) or [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) to store the set of people to match against. PersonGroups hold [Person](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) objects, which each represent an individual person and hold a set of face data belonging to that person.
+You need to create a [PersonGroup](/rest/api/face/person-group-operations/create-person-group) or [LargePersonGroup](/rest/api/face/person-group-operations/create-large-person-group) to store the set of people to match against. PersonGroups hold [Person](/rest/api/face/person-group-operations/create-person-group-person) objects, which each represent an individual person and hold a set of face data belonging to that person.
-The [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249) operation prepares the data set to be used in face data comparisons.
+The [Train](/rest/api/face/person-group-operations/train-person-group) operation prepares the data set to be used in face data comparisons.
### Identification
-The [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) operation takes one or several source face IDs (from a DetectedFace or PersistedFace object) and a PersonGroup or LargePersonGroup. It returns a list of the Person objects that each source face might belong to. Returned Person objects are wrapped as Candidate objects, which have a prediction confidence value.
+The [Identify](/rest/api/face/face-recognition-operations/identify-from-large-person-group) operation takes one or several source face IDs (from a DetectedFace or PersistedFace object) and a PersonGroup or LargePersonGroup. It returns a list of the Person objects that each source face might belong to. Returned Person objects are wrapped as Candidate objects, which have a prediction confidence value.
### Verification
-The [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) operation takes a single face ID (from a DetectedFace or PersistedFace object) and a Person object. It determines whether the face belongs to that same person. Verification is one-to-one matching and can be used as a final check on the results from the Identify API call. However, you can optionally pass in the PersonGroup to which the candidate Person belongs to improve the API performance.
+The [Verify](/rest/api/face/face-recognition-operations/verify-face-to-face) operation takes a single face ID (from a DetectedFace or PersistedFace object) and a Person object. It determines whether the face belongs to that same person. Verification is one-to-one matching and can be used as a final check on the results from the Identify API call. However, you can optionally pass in the PersonGroup to which the candidate Person belongs to improve the API performance.
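To tie these operations together, here is a hedged end-to-end Node.js sketch of the workflow (create a PersonGroup, add a person and a face, train, then identify). The classic `/face/v1.0` routes shown are assumptions to check against the linked operation references, and the endpoint, group ID, person name, image URL, and face ID are placeholders.

```javascript
// Hedged sketch of the recognition workflow: create a PersonGroup, add a person and a face,
// train the group, then identify a detected face against it. Routes follow the classic
// /face/v1.0 surface; verify them against the linked operation references before use.
const endpoint = "https://<your-face-resource>.cognitiveservices.azure.com";
const headers = {
  "Ocp-Apim-Subscription-Key": process.env.FACE_APIKEY,
  "Content-Type": "application/json"
};
const groupId = "my-person-group"; // placeholder

async function face(method, path, body) {
  const res = await fetch(`${endpoint}/face/v1.0${path}`, {
    method,
    headers,
    body: body ? JSON.stringify(body) : undefined
  });
  const text = await res.text();
  return text ? JSON.parse(text) : null; // some operations return an empty body
}

async function enrollAndIdentify(faceIdFromDetect) {
  // 1. Create the group (recognitionModel should match the one used by Detect).
  await face("PUT", `/persongroups/${groupId}`, { name: "Co-workers", recognitionModel: "recognition_04" });

  // 2. Add a person and a face image for that person.
  const { personId } = await face("POST", `/persongroups/${groupId}/persons`, { name: "Anna" });
  await face("POST", `/persongroups/${groupId}/persons/${personId}/persistedfaces`,
    { url: "https://example.com/anna.jpg" }); // placeholder image URL

  // 3. Train the group. Training is asynchronous; in production, poll the group's
  //    training status until it succeeds before calling Identify.
  await face("POST", `/persongroups/${groupId}/train`);

  // 4. Identify the previously detected face against the trained group.
  const candidates = await face("POST", "/identify",
    { personGroupId: groupId, faceIds: [faceIdFromDetect] });
  console.log(candidates); // each candidate carries a personId and confidence
}
```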
## Related data structures
ai-services Concept Generating Thumbnails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-generating-thumbnails.md
The following table illustrates thumbnails defined by smart-cropping for the exa
## Use the API
-The generate thumbnail feature is available through the [Get Thumbnail](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20c) and [Get Area of Interest](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/b156d0f5e11e492d9f64418d) API. You can call this API through a native SDK or through REST calls.
+The generate thumbnail feature is available through the [Get Thumbnail](/rest/api/computervision/generate-thumbnail/generate-thumbnail?view=rest-computervision-v3.2&tabs=HTTP) and [Get Area of Interest](/rest/api/computervision/get-area-of-interest/get-area-of-interest?view=rest-computervision-v3.2&tabs=HTTP) APIs. You can call these APIs through a native SDK or through REST calls.
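A short hedged sketch of the Get Thumbnail call, saving the smart-cropped result to disk; the resource endpoint, key, dimensions, and image URL are placeholders.

```javascript
// Minimal sketch: request a 100x100 smart-cropped thumbnail and write the returned image to disk.
import { writeFile } from "node:fs/promises";

const endpoint = "https://<your-resource>.cognitiveservices.azure.com";
const key = process.env.VISION_KEY;

async function generateThumbnail(imageUrl) {
  const res = await fetch(
    `${endpoint}/vision/v3.2/generateThumbnail?width=100&height=100&smartCropping=true`,
    {
      method: "POST",
      headers: { "Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json" },
      body: JSON.stringify({ url: imageUrl })
    }
  );
  await writeFile("thumbnail.jpg", Buffer.from(await res.arrayBuffer())); // binary image response
}

generateThumbnail("https://example.com/landscape.jpg").catch(console.error); // placeholder URL
```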
* [Generate a thumbnail (how-to)](./how-to/generate-thumbnail.md)
ai-services Concept Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-object-detection.md
# Object detection
-Object detection is similar to [tagging](concept-tag-images-40.md), but the API returns the bounding box coordinates (in pixels) for each object found in the image. For example, if an image contains a dog, cat, and person, the object detection operation will list those objects with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same object in an image.
+Object detection is similar to [tagging](concept-tag-images-40.md), but the API returns the bounding box coordinates (in pixels) for each object found in the image. For example, if an image contains a dog, cat, and person, the object detection operation lists those objects with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same object in an image.
-The object detection function applies tags based on the objects or living things identified in the image. There is no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the object detection function only finds objects and living things, while the tag function can also include contextual terms like "indoor", which can't be localized with bounding boxes.
+The object detection function applies tags based on the objects or living things identified in the image. There's no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the object detection function only finds objects and living things, while the tag function can also include contextual terms like "indoor," which can't be localized with bounding boxes.
Try out the capabilities of object detection quickly and easily in your browser using Vision Studio.
Try out the capabilities of object detection quickly and easily in your browser
## Object detection example
-The following JSON response illustrates what the Analyze API returns when detecting objects in the example image.
+The following JSON response illustrates what the Analyze Image API returns when detecting objects in the example image.
![A woman using a Microsoft Surface device in a kitchen](./Images/windows-kitchen.jpg)
The following JSON response illustrates what the Analyze API returns when detect
It's important to note the limitations of object detection so you can avoid or mitigate the effects of false negatives (missed objects) and limited detail.
-* Objects are generally not detected if they're small (less than 5% of the image).
-* Objects are generally not detected if they're arranged closely together (a stack of plates, for example).
-* Objects are not differentiated by brand or product names (different types of sodas on a store shelf, for example). However, you can get brand information from an image by using the [Brand detection](concept-brand-detection.md) feature.
+* Objects are usually not detected if they're small (less than 5% of the image).
+* Objects are usually not detected if they're arranged closely together (a stack of plates, for example).
+* Objects aren't differentiated by brand or product names (different types of sodas on a store shelf, for example). However, you can get brand information from an image by using the [Brand detection](concept-brand-detection.md) feature.
## Use the API
-The object detection feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Objects` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objects"` section.
+The object detection feature is part of the [Analyze Image](/rest/api/computervision/analyze-image/analyze-image?view=rest-computervision-v3.2&tabs=HTTP) API. You can call this API through a native SDK or through REST calls. Include `Objects` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"objects"` section.
* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Concept Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-ocr.md
Previously updated : 07/04/2023 Last updated : 04/30/2024
ai-services Concept Shelf Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-shelf-analysis.md
It returns a JSON response that accounts for each position in the planogram docu
Get started with Product Recognition by trying out the stitching and rectification APIs. Then do basic analysis with the Product Understanding API. * [Prepare images for Product Recognition](./how-to/shelf-modify-images.md) * [Analyze a shelf image](./how-to/shelf-analyze.md)
-* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b)
+* [API reference](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview)
ai-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-tagging-images.md
The following JSON response illustrates what Azure AI Vision returns when taggin
## Use the API
-The tagging feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
+The tagging feature is part of the [Analyze Image](/rest/api/computervision/analyze-image/analyze-image?view=rest-computervision-v3.2&tabs=HTTP) API. You can call this API through a native SDK or through REST calls. Include `Tags` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"tags"` section.
* [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
ai-services Call Analyze Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-analyze-image.md
This section shows you how to parse the results of the API call. It includes the
> [!NOTE] > **Scoped API calls** >
-> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `<endpoint>/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) for other features that can be called separately.
+> Some of the features in Image Analysis can be called directly as well as through the Analyze API call. For example, you can do a scoped analysis of only image tags by making a request to `<endpoint>/vision/v3.2/tag` (or to the corresponding method in the SDK). See the [reference documentation](/rest/api/computervision/operation-groups?view=rest-computervision-v3.2) for other features that can be called separately.
#### [REST](#tab/rest)
ai-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-read-api.md
When using the Read operation, use the following values for the optional `model-
### Input language
-By default, the service extracts all text from your images or documents including mixed languages. The [Read operation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) has an optional request parameter for language. Only provide a language code if you want to force the document to be processed as that specific language. Otherwise, the service may return incomplete and incorrect text.
+By default, the service extracts all text from your images or documents including mixed languages. The [Read operation](/rest/api/computervision/read/read?view=rest-computervision-v3.2-preview&tabs=HTTP) has an optional request parameter for language. Only provide a language code if you want to force the document to be processed as that specific language. Otherwise, the service may return incomplete and incorrect text.
### Natural reading order output (Latin languages only)
By default, the service extracts text from all pages in the documents. Optionall
You submit either a local image or a remote image to the Read API. For local, you put the binary image data in the HTTP request body. For remote, you specify the image's URL by formatting the request body like the following: `{"url":"http://example.com/images/test.jpg"}`.
-The Read API's [Read call](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously.
+The Read API's [Read call](/rest/api/computervision/read/read?view=rest-computervision-v3.2-preview&tabs=HTTP) takes an image or PDF document as the input and extracts text asynchronously.
`https://{endpoint}/vision/v3.2/read/analyze[?language][&pages][&readingOrder]`
The call returns with a response header field called `Operation-Location`. The `
## Get results from the service
-The second step is to call [Get Read Results](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d9869604be85dee480c8750) operation. This operation takes as input the operation ID that was created by the Read operation.
+The second step is to call the [Get Read Result](/rest/api/computervision/get-read-result/get-read-result?view=rest-computervision-v3.2-preview&tabs=HTTP) operation. This operation takes as input the operation ID that was created by the Read operation.
`https://{endpoint}/vision/v3.2/read/analyzeResults/{operationId}`
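As a rough illustration of the two-step flow, the following C# sketch submits an image and then polls the result URL; the endpoint, key, and image URL are placeholders, and the status check is deliberately simplified (real code should parse the JSON).

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ReadApiSketch
{
    // Placeholder values; substitute your own Vision resource endpoint and key.
    private const string Endpoint = "https://<your-resource-name>.cognitiveservices.azure.com";
    private const string Key = "<your-key>";

    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Key);

        // Step 1: submit the image to the Read operation (the language parameter is optional).
        var readUri = $"{Endpoint}/vision/v3.2/read/analyze";
        var body = new StringContent(
            "{\"url\":\"https://example.com/images/test.jpg\"}", Encoding.UTF8, "application/json");
        HttpResponseMessage submit = await client.PostAsync(readUri, body);

        // The Operation-Location response header points at the result URL for this operation.
        string operationLocation = submit.Headers.GetValues("Operation-Location").First();

        // Step 2: poll Get Read Result until the asynchronous operation completes.
        // Simplified status check for illustration only.
        string resultJson;
        do
        {
            await Task.Delay(1000);
            resultJson = await client.GetStringAsync(operationLocation);
        } while (resultJson.Contains("\"status\":\"notStarted\"") || resultJson.Contains("\"status\":\"running\""));

        Console.WriteLine(resultJson);
    }
}
```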
The response includes a classification of whether each line of text is in handwr
## Next steps - Get started with the [OCR (Read) REST API or client library quickstarts](../quickstarts-sdk/client-library.md).-- [Read 3.2 REST API reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
+- [Read 3.2 REST API reference](/rest/api/computervision/read/read?view=rest-computervision-v3.2-preview&tabs=HTTP).
ai-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/find-similar-faces.md
[!INCLUDE [Gate notice](../includes/identity-gate-notice.md)]
-The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
+The [Find Similar](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list) operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
This guide demonstrates how to use the Find Similar feature in the different language SDKs. The following sample code assumes you have already authenticated a Face client object. For details on how to do this, follow a [quickstart](../quickstarts-sdk/identity-client-library.md).
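For orientation only, a Find Similar call with the .NET client library might look like the following sketch; the `faceClient` object, the large face list ID, and the image path are assumptions for illustration, not code from this guide.

```csharp
// Assumes an authenticated IFaceClient named faceClient and an existing, trained
// LargeFaceList whose ID is "mylargefacelistid_001" (illustrative values only).
using (Stream stream = File.OpenRead(@"/path/to/query/image"))
{
    var faces = await faceClient.Face.DetectWithStreamAsync(stream);
    foreach (var face in faces)
    {
        // Return up to 20 candidate faces from the large face list that look similar to this face.
        var similarFaces = await faceClient.Face.FindSimilarAsync(
            face.FaceId.Value,
            largeFaceListId: "mylargefacelistid_001",
            maxNumOfCandidatesReturned: 20);
    }
}
```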
ai-services Identity Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-access-token.md
Independent software vendors (ISVs) can manage the Face API usage of their clien
This guide shows you how to generate the access tokens, if you're an approved ISV, and how to use the tokens if you're a client.
-The limited access token feature is a part of the existing [Azure AI services token service](https://westus.dev.cognitive.microsoft.com/docs/services/57346a70b4769d2694911369/operations/issueScopedToken). We have added a new operation for the purpose of bypassing the Limited Access gate for approved scenarios. Only ISVs that pass the gating requirements will be given access to this feature.
+The limited access token feature is part of the existing Azure AI services token service. We have added a new operation for the purpose of bypassing the Limited Access gate for approved scenarios. Only ISVs that pass the gating requirements are given access to this feature.
## Example use case
If the ISV learns that a client is using the LimitedAccessToken for non-approved
## Prerequisites
-* [cURL](https://curl.haxx.se/) installed (or another tool that can make HTTP requests).
-* The ISV needs to have either an [Azure AI Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource or an [Azure AI services multi-service](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AllInOne) resource.
-* The client needs to have an [Azure AI Face](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource.
+* [cURL](https://curl.se/) installed (or another tool that can make HTTP requests).
+* The ISV needs to have either an [Azure AI Face](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource or an [Azure AI services multi-service](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AllInOne) resource.
+* The client needs to have an [Azure AI Face](https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/Face) resource.
## Step 1: ISV obtains client's Face resource ID
static void Main(string[] args)
```
-## Next steps
-* [LimitedAccessToken API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57346a70b4769d2694911369/operations/issueLimitedAccessToken)
+
ai-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-detect-faces.md
This guide demonstrates how to use the face detection API to extract attributes from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.
-The code snippets in this guide are written in C# by using the Azure AI Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
+The code snippets in this guide are written in C# by using the Azure AI Face client library. The same functionality is available through the [REST API](/rest/api/face/face-detection-operations/detect).
## Setup
In this guide, you learned how to use the various functionalities of face detect
## Related articles -- [Reference documentation (REST)](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236)
+- [Reference documentation (REST)](/rest/api/face/operation-groups)
- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
ai-services Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/image-retrieval.md
The Multimodal embeddings APIs enable the _vectorization_ of images and text que
The `2024-02-01` API includes a multi-lingual model that supports text search in 102 languages. The original English-only model is still available, but it cannot be combined with the new model in the same search index. If you vectorized text and images using the English-only model, these vectors won't be compatible with multi-lingual text and image vectors. > [!IMPORTANT]
-> These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
+> These APIs are only available in the following geographic regions: SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, JapanEast.
## Prerequisites * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. Be sure to create it in one of the permitted geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource</a> in the Azure portal to get your key and endpoint. Be sure to create it in one of the permitted geographic regions: SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, JapanEast.
* After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on. ## Try out Multimodal embeddings
ai-services Shelf Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-analyze.md
In this guide, you learned how to make a basic analysis call using the pretraine
> [Train a custom model for Product Recognition](../how-to/shelf-model-customization.md) * [Image Analysis overview](../overview-image-analysis.md)
-* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b)
+* [API reference](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview)
ai-services Shelf Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-model-customization.md
In this guide, you learned how to use a custom Product Recognition model to bett
> [Planogram matching](shelf-planogram.md) * [Image Analysis overview](../overview-image-analysis.md)
-* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b)
+* [API reference](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview)
ai-services Shelf Planogram https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-planogram.md
Paired planogram position ID and corresponding detected object from product unde
## Next steps * [Image Analysis overview](../overview-image-analysis.md)
-* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0a)
+* [API reference](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview)
ai-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-detection-model.md
The different face detection models are optimized for different tasks. See the f
|||-|-|--| |**detection_01** | Default choice for all face detection operations. | Not optimized for small, side-view, or blurry faces. | Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. | |**detection_02** | Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Does not return face attributes. | Does not return face landmarks. |
-|**detection_03** | Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns mask and head pose attributes if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
+|**detection_03** | Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns mask, blur, and head pose attributes if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
-The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Face - Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.
+The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.
## Detect faces with specified model Face detection finds the bounding-box locations of human faces and identifies their visual landmarks. It extracts the face's features and stores them for later use in [recognition](../concept-face-recognition.md) operations.
-When you use the [Face - Detect] API, you can assign the model version with the `detectionModel` parameter. The available values are:
+When you use the [Detect] API, you can assign the model version with the `detectionModel` parameter. The available values are:
* `detection_01` * `detection_02` * `detection_03`
-A request URL for the [Face - Detect] REST API will look like this:
+A request URL for the [Detect] REST API will look like this:
`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]&subscription-key=<Subscription key>` If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API will use the default model version (`detection_01`). See the following code example for the .NET client library. ```csharp
-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: false, returnFaceLandmarks: false, recognitionModel: "recognition_04", detectionModel: "detection_03"); ``` ## Add face to Person with specified model
-The Face service can extract face data from an image and associate it with a **Person** object through the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API. In this API call, you can specify the detection model in the same way as in [Face - Detect].
+The Face service can extract face data from an image and associate it with a **Person** object through the [Add Person Group Person Face] API. In this API call, you can specify the detection model in the same way as in [Detect].
See the following code example for the .NET client library.
await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name",
string personId = (await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "My Person Name")).PersonId;
-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03"); ``` This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`. > [!NOTE]
-> You don't need to use the same detection model for all faces in a **Person** object, and you don't need to use the same detection model when detecting new faces to compare with a **Person** object (in the [Face - Identify] API, for example).
+> You don't need to use the same detection model for all faces in a **Person** object, and you don't need to use the same detection model when detecting new faces to compare with a **Person** object (in the [Identify From Person Group] API, for example).
## Add face to FaceList with specified model
You can also specify a detection model when you add a face to an existing **Face
```csharp await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
-string imageUrl = "https://news.microsoft.com/ceo/assets/photos/06_web.jpg";
+string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
await client.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03"); ```
In this article, you learned how to specify the detection model to use with diff
* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) * [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python%253fpivots%253dprogramming-language-python)
-[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
-[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
-[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
-[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
-[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
-[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
-[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
-[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
-[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
-[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
-[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
-[FaceList - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250
-[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
+[Detect]: /rest/api/face/face-detection-operations/detect
+[Identify From Person Group]: /rest/api/face/face-recognition-operations/identify-from-person-group
+[Add Person Group Person Face]: /rest/api/face/person-group-operations/add-person-group-person-face
ai-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-recognition-model.md
Face detection identifies the visual landmarks of human faces and finds their bo
The recognition model is used when the face features are extracted, so you can specify a model version when performing the Detect operation.
-When using the [Face - Detect] API, assign the model version with the `recognitionModel` parameter. The available values are:
+When using the [Detect] API, assign the model version with the `recognitionModel` parameter. The available values are:
* `recognition_01` * `recognition_02` * `recognition_03` * `recognition_04`
-Optionally, you can specify the _returnRecognitionModel_ parameter (default **false**) to indicate whether _recognitionModel_ should be returned in response. So, a request URL for the [Face - Detect] REST API will look like this:
+Optionally, you can specify the _returnRecognitionModel_ parameter (default **false**) to indicate whether _recognitionModel_ should be returned in response. So, a request URL for the [Detect] REST API will look like this:
`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel]&subscription-key=<Subscription key>`
var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId
## Identify faces with the specified model
-The Face service can extract face data from an image and associate it with a **Person** object (through the [Add face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API call, for example), and multiple **Person** objects can be stored together in a **PersonGroup**. Then, a new face can be compared against a **PersonGroup** (with the [Face - Identify] call), and the matching person within that group can be identified.
+The Face service can extract face data from an image and associate it with a **Person** object (through the [Add Person Group Person Face] API call, for example), and multiple **Person** objects can be stored together in a **PersonGroup**. Then, a new face can be compared against a **PersonGroup** (with the [Identify From Person Group] call), and the matching person within that group can be identified.
-A **PersonGroup** should have one unique recognition model for all of the **Person**s, and you can specify this using the `recognitionModel` parameter when you create the group ([PersonGroup - Create] or [LargePersonGroup - Create]). If you don't specify this parameter, the original `recognition_01` model is used. A group will always use the recognition model it was created with, and new faces will become associated with this model when they're added to it. This can't be changed after a group's creation. To see what model a **PersonGroup** is configured with, use the [PersonGroup - Get] API with the _returnRecognitionModel_ parameter set as **true**.
+A **PersonGroup** should have one unique recognition model for all of the **Person**s, and you can specify this using the `recognitionModel` parameter when you create the group ([Create Person Group] or [Create Large Person Group]). If you don't specify this parameter, the original `recognition_01` model is used. A group will always use the recognition model it was created with, and new faces will become associated with this model when they're added to it. This can't be changed after a group's creation. To see what model a **PersonGroup** is configured with, use the [Get Person Group] API with the _returnRecognitionModel_ parameter set as **true**.
See the following code example for the .NET client library.
await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name",
In this code, a **PersonGroup** with ID `mypersongroupid` is created, and it's set up to use the _recognition_04_ model to extract face features.
-Correspondingly, you need to specify which model to use when detecting faces to compare against this **PersonGroup** (through the [Face - Detect] API). The model you use should always be consistent with the **PersonGroup**'s configuration; otherwise, the operation will fail due to incompatible models.
+Correspondingly, you need to specify which model to use when detecting faces to compare against this **PersonGroup** (through the [Detect] API). The model you use should always be consistent with the **PersonGroup**'s configuration; otherwise, the operation will fail due to incompatible models.
-There is no change in the [Face - Identify] API; you only need to specify the model version in detection.
+There is no change in the [Identify From Person Group] API; you only need to specify the model version in detection.
## Find similar faces with the specified model
-You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the **FaceList** with [FaceList - Create] API or [LargeFaceList - Create]. If you don't specify this parameter, the `recognition_01` model is used by default. A **FaceList** will always use the recognition model it was created with, and new faces will become associated with this model when they're added to the list; you can't change this after creation. To see what model a **FaceList** is configured with, use the [FaceList - Get] API with the _returnRecognitionModel_ parameter set as **true**.
+You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the **FaceList** with [Create Face List] API or [Create Large Face List]. If you don't specify this parameter, the `recognition_01` model is used by default. A **FaceList** will always use the recognition model it was created with, and new faces will become associated with this model when they're added to the list; you can't change this after creation. To see what model a **FaceList** is configured with, use the [Get Face List] API with the _returnRecognitionModel_ parameter set as **true**.
See the following code example for the .NET client library.
See the following code example for the .NET client library.
await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04"); ```
-This code creates a **FaceList** called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this **FaceList** for similar faces to a new detected face, that face must have been detected ([Face - Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent.
+This code creates a **FaceList** called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this **FaceList** for similar faces to a new detected face, that face must have been detected ([Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent.
-There is no change in the [Face - Find Similar] API; you only specify the model version in detection.
+There is no change in the [Find Similar] API; you only specify the model version in detection.
## Verify faces with the specified model
-The [Face - Verify] API checks whether two faces belong to the same person. There is no change in the Verify API with regard to recognition models, but you can only compare faces that were detected with the same model.
+The [Verify Face To Face] API checks whether two faces belong to the same person. There is no change in the Verify API with regard to recognition models, but you can only compare faces that were detected with the same model.
## Evaluate different models If you'd like to compare the performances of different recognition models on your own data, you'll need to: 1. Create four **PersonGroup**s using _recognition_01_, _recognition_02_, _recognition_03_, and _recognition_04_ respectively. 1. Use your image data to detect faces and register them to **Person**s within these four **PersonGroup**s.
-1. Train your **PersonGroup**s using the PersonGroup - Train API.
-1. Test with Face - Identify on all four **PersonGroup**s and compare the results.
+1. Train your **PersonGroup**s using the [Train Person Group] API.
+1. Test with [Identify From Person Group] on all four **PersonGroup**s and compare the results.
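Shown below is a minimal sketch of such a comparison loop, assuming the .NET client library, an authenticated `faceClient`, and four **PersonGroup**s that are already populated and trained, one per recognition model; all IDs and the test image URL are illustrative placeholders.

```csharp
// Each PersonGroup is assumed to exist already, created with the matching recognition model
// (for example, "evalgroup_recognition_01") and trained on the same set of persons and faces.
string[] recognitionModels = { "recognition_01", "recognition_02", "recognition_03", "recognition_04" };
string testImageUrl = "https://example.com/images/test-face.jpg"; // placeholder

foreach (string model in recognitionModels)
{
    string personGroupId = $"evalgroup_{model}";

    // Detect the test face with the same recognition model that the group was created with.
    var faces = await faceClient.Face.DetectWithUrlAsync(
        url: testImageUrl, returnFaceId: true, recognitionModel: model);
    var faceIds = faces.Select(f => f.FaceId.Value).ToList();

    // Identify the detected face(s) against the matching PersonGroup and compare the candidates.
    var identifyResults = await faceClient.Face.IdentifyAsync(faceIds, personGroupId);
    foreach (var identifyResult in identifyResults)
    {
        foreach (var candidate in identifyResult.Candidates)
        {
            Console.WriteLine($"{model}: person {candidate.PersonId}, confidence {candidate.Confidence}");
        }
    }
}
```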
If you normally specify a confidence threshold (a value between zero and one that determines how confident the model must be to identify a face), you may need to use different thresholds for different models. A threshold for one model isn't meant to be shared to another and won't necessarily produce the same results.
In this article, you learned how to specify the recognition model to use with di
* [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) * [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python%253fpivots%253dprogramming-language-python)
-[Face - Detect]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d
-[Face - Find Similar]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237
-[Face - Identify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239
-[Face - Verify]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
-[PersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244
-[PersonGroup - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395246
-[PersonGroup Person - Add Face]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b
-[PersonGroup - Train]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395249
-[LargePersonGroup - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d
-[FaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b
-[FaceList - Get]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524c
-[LargeFaceList - Create]: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc
+[Detect]: /rest/api/face/face-detection-operations/detect
+[Verify Face To Face]: /rest/api/face/face-recognition-operations/verify-face-to-face
+[Identify From Person Group]: /rest/api/face/face-recognition-operations/identify-from-person-group
+[Find Similar]: /rest/api/face/face-recognition-operations/find-similar-from-large-face-list
+[Create Person Group]: /rest/api/face/person-group-operations/create-person-group
+[Get Person Group]: /rest/api/face/person-group-operations/get-person-group
+[Train Person Group]: /rest/api/face/person-group-operations/train-person-group
+[Add Person Group Person Face]: /rest/api/face/person-group-operations/add-person-group-person-face
+[Create Large Person Group]: /rest/api/face/person-group-operations/create-large-person-group
+[Create Face List]: /rest/api/face/face-list-operations/create-face-list
+[Get Face List]: /rest/api/face/face-list-operations/get-face-list
+[Create Large Face List]: /rest/api/face/face-list-operations/create-large-face-list
ai-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-large-scale.md
This guide shows you how to scale up from existing **PersonGroup** and **FaceLis
> [!IMPORTANT] > The newer data structure **PersonDirectory** is recommended for new development. It can hold up to 75 million identities and does not require manual training. For more information, see the [PersonDirectory guide](./use-persondirectory.md).
-This guide demonstrates the migration process. It assumes a basic familiarity with **PersonGroup** and **FaceList** objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide.
+This guide demonstrates the migration process. It assumes a basic familiarity with **PersonGroup** and **FaceList** objects, the **Train** operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide.
**LargePersonGroup** and **LargeFaceList** are collectively referred to as large-scale operations. **LargePersonGroup** can contain up to 1 million persons, each with a maximum of 248 faces. **LargeFaceList** can contain up to 1 million faces. The large-scale operations are similar to the conventional **PersonGroup** and **FaceList** but have some differences because of the new architecture.
Add all of the faces and persons from the **PersonGroup** to the new **LargePers
| - | Train | | - | Get Training Status |
-The preceding table is a comparison of list-level operations between **FaceList** and **LargeFaceList**. As is shown, **LargeFaceList** comes with new operations, **Train** and **Get Training Status**, when compared with **FaceList**. Training the **LargeFaceList** is a precondition of the
-[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation. Training isn't required for **FaceList**. The following snippet is a helper function to wait for the training of a **LargeFaceList**:
+The preceding table is a comparison of list-level operations between **FaceList** and **LargeFaceList**. As is shown, **LargeFaceList** comes with new operations, [Train](/rest/api/face/face-list-operations/train-large-face-list) and [Get Training Status](/rest/api/face/face-list-operations/get-large-face-list-training-status), when compared with **FaceList**. Training the **LargeFaceList** is a precondition of the
+[FindSimilar](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list) operation. Training isn't required for **FaceList**. The following snippet is a helper function to wait for the training of a **LargeFaceList**:
```csharp /// <summary>
private static async Task TrainLargeFaceList(
int timeIntervalInMilliseconds = 1000) { // Trigger a train call.
- await FaceClient.LargeTrainLargeFaceListAsync(largeFaceListId);
+ await faceClient.LargeFaceList.TrainAsync(largeFaceListId);
// Wait for training finish. while (true) {
- Task.Delay(timeIntervalInMilliseconds).Wait();
- var status = await faceClient.LargeFaceList.TrainAsync(largeFaceListId);
+ await Task.Delay(timeIntervalInMilliseconds);
+ var status = await faceClient.LargeFaceList.GetTrainingStatusAsync(largeFaceListId);
if (status.Status == Status.Running) {
Previously, a typical use of **FaceList** with added faces and **FindSimilar** l
const string FaceListId = "myfacelistid_001"; const string FaceListName = "MyFaceListDisplayName"; const string ImageDir = @"/path/to/FaceList/images";
-faceClient.FaceList.CreateAsync(FaceListId, FaceListName).Wait();
+await faceClient.FaceList.CreateAsync(FaceListId, FaceListName);
// Add Faces to the FaceList. Parallel.ForEach(
const string QueryImagePath = @"/path/to/query/image";
var results = new List<SimilarPersistedFace[]>(); using (Stream stream = File.OpenRead(QueryImagePath)) {
- var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
+ var faces = await faceClient.Face.DetectWithStreamAsync(stream);
foreach (var face in faces) { results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, FaceListId, 20));
When migrating it to **LargeFaceList**, it becomes the following:
const string LargeFaceListId = "mylargefacelistid_001"; const string LargeFaceListName = "MyLargeFaceListDisplayName"; const string ImageDir = @"/path/to/FaceList/images";
-faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName).Wait();
+await faceClient.LargeFaceList.CreateAsync(LargeFaceListId, LargeFaceListName);
// Add Faces to the LargeFaceList. Parallel.ForEach(
const string QueryImagePath = @"/path/to/query/image";
var results = new List<SimilarPersistedFace[]>(); using (Stream stream = File.OpenRead(QueryImagePath)) {
- var faces = faceClient.Face.DetectWithStreamAsync(stream).Result;
+ var faces = await faceClient.Face.DetectWithStreamAsync(stream);
foreach (var face in faces) { results.Add(await faceClient.Face.FindSimilarAsync(face.FaceId, largeFaceListId: LargeFaceListId));
As previously shown, the data management and the **FindSimilar** part are almost
## Step 3: Train suggestions
-Although the **Train** operation speeds up **[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237)**
-and **[Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239)**, the training time suffers, especially when coming to large scale. The estimated training time in different scales is listed in the following table.
+Although the **Train** operation speeds up [FindSimilar](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list)
+and [Identification](/rest/api/face/face-recognition-operations/identify-from-large-person-group), the training itself takes time, especially at large scale. The estimated training time for different scales is listed in the following table.
| Scale for faces or persons | Estimated training time | |::|::|
ai-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-persondirectory.md
var client = new HttpClient();
// Request headers client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");
-var addPersonUri = "https:// {endpoint}/face/v1.0-preview/persons";
+var addPersonUri = "https://{endpoint}/face/v1.0-preview/persons";
HttpResponseMessage response;
Stopwatch s = Stopwatch.StartNew();
string status = "notstarted"; do {
- if (status == "succeeded")
- {
- await Task.Delay(500);
- }
+ await Task.Delay(500);
var operationResponseMessage = await client.GetAsync(operationLocation);
ai-services Identity Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/identity-api-reference.md
Azure AI Face is a cloud-based service that provides algorithms for face detection and recognition. The Face APIs comprise the following categories: -- Face Algorithm APIs: Cover core functions such as [Detection](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237), [Verification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), and [Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).-- [DetectLiveness session APIs](https://westus.dev.cognitive.microsoft.com/docs/services/609a5e53f0971cb3/operations/session-create-detectliveness-singlemodal): Used to create and manage a Liveness Detection session. See the [Liveness Detection](/azure/ai-services/computer-vision/tutorials/liveness) tutorial.-- [FaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b): Used to manage a FaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).-- [LargePersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adcba3a7b9412a4d53f40): Used to manage LargePersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).-- [LargePersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d): Used to manage a LargePersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).-- [LargeFaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc): Used to manage a LargeFaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237).-- [PersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c): Used to manage PersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).-- [PersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244): Used to manage a PersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).-- [PersonDirectory Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/face-v1-0-preview/operations/5f063c5279ef2ecd2da02bbc)-- [PersonDirectory DynamicPersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/face-v1-0-preview/operations/5f066b475d2e298611e11115)-- [Liveness Session 
APIs](https://westus.dev.cognitive.microsoft.com/docs/services/609a5e53f0971cb3/operations/session-create-detectliveness-singlemodal) and [Liveness-With-Verify Session APIs](https://westus.dev.cognitive.microsoft.com/docs/services/609a5e53f0971cb3/operations/session-create-detectlivenesswithverify-singlemodal): Used to manage liveness sessions from App Server to orchestrate the liveness solution.
+- Face Algorithm APIs: Cover core functions such as [Detection](/rest/api/face/face-detection-operations/detect), [Find Similar](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list), [Verification](/rest/api/face/face-recognition-operations/verify-face-to-face), [Identification](/rest/api/face/face-recognition-operations/identify-from-large-person-group), and [Group](/rest/api/face/face-recognition-operations/group).
+- [DetectLiveness session APIs](/rest/api/face/liveness-session-operations): Used to create and manage a Liveness Detection session. See the [Liveness Detection](/azure/ai-services/computer-vision/tutorials/liveness) tutorial.
+- [FaceList APIs](/rest/api/face/face-list-operations): Used to manage a FaceList for [Find Similar From Face List](/rest/api/face/face-recognition-operations/find-similar-from-face-list).
+- [LargeFaceList APIs](/rest/api/face/face-list-operations): Used to manage a LargeFaceList for [Find Similar From Large Face List](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list).
+- [PersonGroup APIs](/rest/api/face/person-group-operations): Used to manage a PersonGroup dataset for [Identification From Person Group](/rest/api/face/face-recognition-operations/identify-from-person-group).
+- [LargePersonGroup APIs](/rest/api/face/person-group-operations): Used to manage a LargePersonGroup dataset for [Identification From Large Person Group](/rest/api/face/face-recognition-operations/identify-from-large-person-group).
+- [PersonDirectory APIs](/rest/api/face/person-directory-operations): Used to manage a PersonDirectory dataset for [Identification From Person Directory](/rest/api/face/face-recognition-operations/identify-from-person-directory) or [Identification From Dynamic Person Group](/rest/api/face/face-recognition-operations/identify-from-dynamic-person-group).
+- [Face API error codes](./reference-face-error-codes.md): A list of all error codes returned by the Face API operations.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/language-support.md
The following table lists the OCR supported languages for print text by the most
## Analyze image
-Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) for a list of all the actions you can do with the Analyze API, or follow the [How-to guide](/azure/ai-services/computer-vision/how-to/call-analyze-image-40) to try them out.
+Some features of the [Analyze - Image](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API can return results in other languages, specified with the `language` query parameter. Other actions return results in English regardless of what language is specified, and others throw an exception for unsupported languages. Actions are specified with the `visualFeatures` and `details` query parameters; see the [Overview](overview-image-analysis.md) for a list of all the actions you can do with the Analyze API, or follow the [How-to guide](/azure/ai-services/computer-vision/how-to/call-analyze-image-40) to try them out.
| Language | Language code | Categories | Tags | Description | Adult, Brands, Color, Faces, ImageType, Objects | Celebrities, Landmarks | Captions, Dense captions| |:|::|:-:|::|::|::|::|:--:|
ai-services Overview Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-identity.md
Previously updated : 07/04/2023 Last updated : 04/30/2024 - ignite-2023
Optionally, face detection can extract a set of face-related attributes, such as
[!INCLUDE [Sensitive attributes notice](./includes/identity-sensitive-attributes.md)]
-For more information on face detection and analysis, see the [Face detection](concept-face-detection.md) concepts article. Also see the [Detect API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) reference documentation.
+For more information on face detection and analysis, see the [Face detection](concept-face-detection.md) concepts article. Also see the [Detect API](/rest/api/face/face-detection-operations/detect) reference documentation.
You can try out Face detection quickly and easily in your browser using Vision Studio.
Concepts
Face liveness SDK reference docs: - [Java (Android)](https://aka.ms/liveness-sdk-java) - [Swift (iOS)](https://aka.ms/liveness-sdk-ios)
+- [JavaScript (Web)](https://aka.ms/liveness-sdk-web)
## Face recognition
The verification operation answers the question, "Do these two faces belong to t
Verification is also a "one-to-one" matching of a face in an image to a single face from a secure repository or photo to verify that they're the same individual. Verification can be used for access control, such as a banking app that enables users to open a credit account remotely by taking a new picture of themselves and sending it with a picture of their photo ID. It can also be used as a final check on the results of an Identification API call.
-For more information about Face recognition, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) and [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) API reference documentation.
+For more information about Face recognition, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Identify](/rest/api/face/face-recognition-operations/identify-from-large-person-group) and [Verify](/rest/api/face/face-recognition-operations/verify-face-to-face) API reference documentation.
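As a rough sketch (not code from this article), a verification call with the .NET client library can look like this; the two face IDs are assumed to come from earlier Detect calls.

```csharp
// Assumes an authenticated IFaceClient named faceClient and two faces detected earlier
// (illustrative placeholders; both faces must be detected with the same recognition model).
Guid faceId1 = detectedFace1.FaceId.Value;
Guid faceId2 = detectedFace2.FaceId.Value;

// Check whether the two detected faces belong to the same person.
var result = await faceClient.Face.VerifyFaceToFaceAsync(faceId1, faceId2);
Console.WriteLine($"Identical: {result.IsIdentical}, confidence: {result.Confidence}");
```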
## Find similar faces The Find Similar operation does face matching between a target face and a set of candidate faces, finding a smaller set of faces that look similar to the target face. This is useful for doing a face search by image.
-The service supports two working modes, **matchPerson** and **matchFace**. The **matchPerson** mode returns similar faces after filtering for the same person by using the [Verify API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a). The **matchFace** mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
+The service supports two working modes, **matchPerson** and **matchFace**. The **matchPerson** mode returns similar faces after filtering for the same person by using the [Verify API](/rest/api/face/face-recognition-operations/verify-face-to-face). The **matchFace** mode ignores the same-person filter. It returns a list of similar candidate faces that may or may not belong to the same person.
The following example shows the target face:
And these images are the candidate faces:
![Five images of people smiling. Images A and B show the same person.](./media/FaceFindSimilar.Candidates.jpg)
-To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) reference documentation.
+To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](/rest/api/face/face-recognition-operations/find-similar-from-large-face-list) reference documentation.
## Group faces The Group operation divides a set of unknown faces into several smaller groups based on similarity. Each group is a disjoint proper subset of the original set of faces. It also returns a single "messyGroup" array that contains the face IDs for which no similarities were found.
-All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression, for example. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Group API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238) reference documentation.
+All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression, for example. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Group API](/rest/api/face/face-recognition-operations/group) reference documentation.
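For illustration, a Group call with the .NET client library might look like the following sketch; the detected-face collection is an assumption, not code from this article.

```csharp
// Assumes an authenticated IFaceClient named faceClient and a collection of faces
// gathered from earlier Detect calls (illustrative placeholder).
IList<Guid> faceIds = detectedFaces.Select(f => f.FaceId.Value).ToList();

// Divide the unknown faces into groups of similar-looking faces.
var groupResult = await faceClient.Face.GroupAsync(faceIds);

Console.WriteLine($"Groups found: {groupResult.Groups.Count}");
Console.WriteLine($"Faces with no similar match: {groupResult.MessyGroup.Count}");
```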
## Input requirements
ai-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-ocr.md
Previously updated : 07/04/2023 Last updated : 04/30/2024
Intelligent Document Processing (IDP) uses OCR as its foundational technology to
Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning-based models supporting [global languages](./language-support.md). It can extract printed and handwritten text including mixed languages and writing styles. **Read** is available as a cloud service and as an on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences. > [!WARNING]
-> The Azure AI Vision legacy [OCR API in v3.2](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) and [RecognizeText API in v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) operations are not recomended for use.
+> The Azure AI Vision legacy [OCR API in v3.2](/rest/api/computervision/recognize-printed-text?view=rest-computervision-v3.2) and [RecognizeText API in v2.1](/rest/api/computervision/recognize-printed-text/recognize-printed-text?view=rest-computervision-v2.1) operations are not recommended for use.
[!INCLUDE [read-editions](includes/read-editions.md)]
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview.md
Previously updated : 07/04/2023 Last updated : 04/30/2024 - ignite-2023
ai-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/read-container-migration-guide.md
Set the timer with `Queue:Azure:QueueVisibilityTimeoutInMilliseconds`, which set
* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings * Review [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text
-* Refer to the [Read API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container.
+* Refer to the [Read API](/rest/api/computervision/read/read?view=rest-computervision-v3.2-preview&tabs=HTTP) for details about the methods supported by the container.
* Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality. * Use more [Azure AI containers](../cognitive-services-container-support.md)
ai-services Reference Face Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/reference-face-error-codes.md
+
+ Title: Azure AI Face API error codes
+description: Error codes returned from Face API services.
+++++ Last updated : 05/24/2024++
+# Azure AI Face API error codes
++
+## Common error codes
+
+These error codes can be returned by any Face API calls.
+
+|Http status code|Error code|Error message|Description|
+|-|-|-|--|
+|Bad Request (400)|BadArgument|Request body is invalid.||
+|Bad Request (400)|BadArgument|JSON parsing error.|Bad or unrecognizable request JSON body.|
+|Bad Request (400)|BadArgument|'recognitionModel' is invalid.||
+|Bad Request (400)|BadArgument|'detectionModel' is invalid.||
+|Bad Request (400)|BadArgument|'name' is empty.||
+|Bad Request (400)|BadArgument|'name' is too long.||
+|Bad Request (400)|BadArgument|'userData' is too long.||
+|Bad Request (400)|BadArgument|'start' is too long.||
+|Bad Request (400)|BadArgument|'top' is invalid.||
+|Bad Request (400)|BadArgument|Argument targetFace out of range.||
+|Bad Request (400)|BadArgument|Invalid argument targetFace.|Caused by invalid string format or invalid left/top/height/width value.|
+|Bad Request (400)|InvalidURL|Invalid image URL.|Supported formats include JPEG, PNG, GIF(the first frame) and BMP.|
+|Bad Request (400)|InvalidURL|Invalid image URL or error downloading from target server. Remote server error returned: "An error occurred while sending the request."||
+|Bad Request (400)|InvalidImage|Decoding error, image format unsupported.||
+|Bad Request (400)|InvalidImage|No face detected in the image.||
+|Bad Request (400)|InvalidImage|There is more than 1 face in the image.||
+|Bad Request (400)|InvalidImageSize|Image size is too small.|The valid image file size should be larger than or equal to 1 KB.|
+|Bad Request (400)|InvalidImageSize|Image size is too big.|The valid image file size should be no larger than 6 MB.|
+|Unauthorized (401)|401|Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.||
+|Conflict (409)|ConcurrentOperationConflict|There is a conflict operation on resource `<resourceName>`, please try later.||
+|Too Many Requests (429)|429|Rate limit is exceeded.||
++
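The Face API typically wraps these codes in a JSON error body of the form `{"error": {"code": "...", "message": "..."}}`. The following is a minimal, hedged sketch of surfacing that body and backing off on `429` responses; the helper name and retry policy are illustrative only:

```python
import time
import requests

def call_face_api(method: str, url: str, key: str, max_retries: int = 3, **kwargs):
    """Call a Face API endpoint and surface the documented error codes on failure."""
    for attempt in range(max_retries):
        response = requests.request(
            method, url, headers={"Ocp-Apim-Subscription-Key": key}, **kwargs
        )
        if response.ok:
            return response.json() if response.content else None
        if response.status_code == 429:
            # "Rate limit is exceeded." -- back off and retry.
            time.sleep(2 ** attempt)
            continue
        error = response.json().get("error", {})
        raise RuntimeError(
            f"Face API error {response.status_code} "
            f"{error.get('code')}: {error.get('message')}"
        )
    raise RuntimeError("Retries exhausted after repeated 429 responses.")
```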
+## Face Detection error codes
+
+These error codes can be returned by Face Detection operation.
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|Invalid argument returnFaceAttributes.||
+|Bad Request (400)|BadArgument|'returnFaceAttributes' is not supported by detection_02.||
+|Bad Request (400)|BadArgument|'returnLandmarks' is not supported by detection_02.||
+
+## Face Liveness Session error codes
+
+These error codes can be returned by Face Liveness Session operations.
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|Start parameter is invalid. Please specify the 'Id' field of the last entry to continue the listing process.||
+|Bad Request (400)|BadArgument|Top parameter is invalid. Valid range is between 1 and 1000 inclusive.||
+|Bad Request (400)|InvalidRequestBody|Incorrect request body provided. Please check the operation schema and try again.||
+|Bad Request (400)|InvalidTokenLifetime|Invalid authTokenTimeToLiveInSeconds specified. Must be within 60 to 86400.||
+|Bad Request (400)|InvalidLivenessOperationMode|Invalid livenessOperationMode specified. Must be 'Passive'.||
+|Bad Request (400)|InvalidDeviceCorrelationId|A device correlation ID is required in the request body during session create or session start. Must not be null or empty, and be no more than 64 characters.||
+|Not Found (404)|SessionNotFound|Session ID is not found. The session ID is expired or does not exist.||
+
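As a rough illustration, the sketch below validates the session-create parameters client-side against the limits listed above before building a request body. The field names are taken from the error messages in the table; consult the Face API reference for the authoritative session-create schema:

```python
def build_liveness_session_body(
    device_correlation_id: str,
    auth_token_ttl_seconds: int = 600,
    liveness_operation_mode: str = "Passive",
) -> dict:
    """Validate liveness-session parameters against the limits in the table above."""
    if not device_correlation_id or len(device_correlation_id) > 64:
        raise ValueError("deviceCorrelationId must be non-empty and at most 64 characters.")
    if not 60 <= auth_token_ttl_seconds <= 86400:
        raise ValueError("authTokenTimeToLiveInSeconds must be within 60 to 86400.")
    if liveness_operation_mode != "Passive":
        raise ValueError("livenessOperationMode must be 'Passive'.")
    return {
        "livenessOperationMode": liveness_operation_mode,
        "deviceCorrelationId": device_correlation_id,
        "authTokenTimeToLiveInSeconds": auth_token_ttl_seconds,
    }
```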
+## Face Identify error codes
+
+These error codes can be returned by Face Identify operation.
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|'recognitionModel' is incompatible.||
+|Bad Request (400)|BadArgument|Person group ID is invalid.||
+|Bad Request (400)|BadArgument|Large person group ID is invalid.||
+|Bad Request (400)|BadArgument|Dynamic person group ID is invalid.||
+|Bad Request (400)|BadArgument|The argument maxNumOfCandidatesReturned is not valid.|The valid range is between [1, 100].|
+|Bad Request (400)|BadArgument|The argument confidenceThreshold is not valid.|The valid range is between [0, 1].|
+|Bad Request (400)|BadArgument|The length of faceIds is not in a valid range.|The valid range is between [1, 10].|
+|Bad Request (400)|FaceNotFound|Face is not found.||
+|Bad Request (400)|PersonGroupNotFound|Person group is not found.||
+|Bad Request (400)|LargePersonGroupNotFound|Large person group is not found.||
+|Bad Request (400)|DynamicPersonGroupNotFound|Dynamic person group is not found.||
+|Bad Request (400)|PersonGroupNotTrained|Person group not trained.||
+|Bad Request (400)|LargePersonGroupNotTrained|Large person group not trained.||
+|Bad Request (400)|PersonGroupIdAndLargePersonGroupIdBothNotNull|Large person group ID and person group ID are both not null.||
+|Bad Request (400)|PersonGroupIdAndLargePersonGroupIdBothNull|Large person group ID and person group ID are both null.||
+|Bad Request (400)|MissingIdentificationScopeParameters|No identification scope parameter is present in the request.||
+|Bad Request (400)|IncompatibleIdentificationScopeParametersCombination|Incompatible identification scope parameters are present in the request.||
+|Conflict (409)|PersonGroupTrainingNotFinished|Person group is under training.||
+|Conflict (409)|LargePersonGroupTrainingNotFinished|Large person group is under training.||
+
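For illustration, this hedged sketch builds an Identify request that respects the parameter ranges above and scopes the search to a single large person group. The endpoint, key, and v1.0 request shape are assumptions based on the classic Face REST API; check the Face API reference for the authoritative schema:

```python
import requests

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def identify_faces(face_ids: list, large_person_group_id: str,
                   max_candidates: int = 10, confidence_threshold: float = 0.7) -> list:
    """Identify detected faces against a trained large person group."""
    if not 1 <= len(face_ids) <= 10:
        raise ValueError("faceIds must contain between 1 and 10 entries.")
    if not 1 <= max_candidates <= 100:
        raise ValueError("maxNumOfCandidatesReturned must be within [1, 100].")
    if not 0 <= confidence_threshold <= 1:
        raise ValueError("confidenceThreshold must be within [0, 1].")
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/identify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={
            "faceIds": face_ids,
            "largePersonGroupId": large_person_group_id,  # exactly one scope parameter
            "maxNumOfCandidatesReturned": max_candidates,
            "confidenceThreshold": confidence_threshold,
        },
    )
    response.raise_for_status()  # 409 *TrainingNotFinished surfaces here if not trained
    return response.json()
```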
+## Face Verify error codes
+
+These error codes can be returned by Face Verify operation.
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|'recognitionModel' is incompatible.||
+|Bad Request (400)|BadArgument|Face ID is invalid.|A valid faceId comes from Face - Detect.|
+|Bad Request (400)|BadArgument|Person ID is invalid.|A valid personId is generated from Create Person Group Person, Create Large Person Group Person or Person Directory - Create Person.|
+|Bad Request (400)|BadArgument|Person group ID is invalid.||
+|Bad Request (400)|BadArgument|Large person group ID is invalid.||
+|Bad Request (400)|PersonNotFound|Person is not found.||
+|Bad Request (400)|PersonGroupNotFound|Person Group is not found.||
+|Bad Request (400)|LargePersonGroupNotFound|Large Person Group is not found.||
+|Not Found (404)|FaceNotFound|Face is not found.||
+|Not Found (404)|PersonNotFound|Person is not found.||
+|Not Found (404)|PersistedFaceNotFound|No persisted face of the person is found.||
+
+## Find Similar error codes
+
+These error codes can be returned by Face Find Similar operation.
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|'recognitionModel' is incompatible.||
+|Bad Request (400)|BadArgument|Mode is invalid.||
+|Bad Request (400)|BadArgument|Face list ID is invalid.||
+|Bad Request (400)|BadArgument|Large face list ID is invalid.||
+|Bad Request (400)|BadArgument|LargeFaceListId, faceListId and faceIds, not exactly one of them is valid.||
+|Bad Request (400)|BadArgument|LargeFaceListId, faceListId and faceIds are all null.||
+|Bad Request (400)|BadArgument|2 or more of largeFaceListId, faceListId and faceIds are not null.||
+|Bad Request (400)|BadArgument|The argument maxNumOfCandidatesReturned is not valid.|The valid range is between [1, 1000].|
+|Bad Request (400)|BadArgument|The length of faceIds is not in a valid range.|The valid range is between [1, 1000].|
+|Bad Request (400)|FaceNotFound|Face is not found.||
+|Bad Request (400)|FaceListNotFound|Face list is not found.||
+|Bad Request (400)|LargeFaceListNotFound|Large face list is not found.||
+|Bad Request (400)|LargeFaceListNotTrained|Large face list is not trained.||
+|Bad Request (400)|FaceListNotReady|Face list is empty.||
+|Conflict (409)|LargeFaceListTrainingNotFinished|Large face list is under training.||
+
+## Face Group error codes
+
+These error codes can be returned by Face Group operation.
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|'recognitionModel' is incompatible.||
+|Bad Request (400)|BadArgument|The length of faceIds is not in a valid range.|The valid range is between [2, 1000].|
++
+## Person Group operations
+
+These error codes can be returned by Person Group operations.
+
+### Person Group error codes
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|Person group ID is invalid.|Valid characters are lowercase English letters, digits, '-' and '_'. The maximum length is 64.|
+|Forbidden (403)|QuotaExceeded|Person group number reached subscription level limit.||
+|Forbidden (403)|QuotaExceeded|Person number reached person group level limit.||
+|Forbidden (403)|QuotaExceeded|Person number reached subscription level limit.||
+|Forbidden (403)|QuotaExceeded|Persisted face number reached limit.||
+|Not Found (404)|PersonGroupNotFound|Person group is not found.||
+|Not Found (404)|PersonGroupNotFound|Person group ID is invalid.||
+|Not Found (404)|PersonNotFound|Person `<personId>` is not found.||
+|Not Found (404)|PersonNotFound|Person ID is invalid.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face is not found.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face `<faceId>` is not found.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face ID is invalid.||
+|Not Found (404)|PersonGroupNotTrained|Person group not trained.|This error appears when getting the training status of a group that has never been trained.|
+|Conflict (409)|PersonGroupExists|Person group already exists.||
+|Conflict (409)|PersonGroupTrainingNotFinished|Person group is under training.|Try again after training has completed.|
+
+### Large Person Group error codes
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|Large person group ID is invalid.|Valid characters are lowercase English letters, digits, '-' and '_'. The maximum length is 64.|
+|Bad Request (400)|BadArgument|Both 'name' and 'userData' are empty.||
+|Forbidden (403)|QuotaExceeded|Large person group number reached subscription level limit.||
+|Forbidden (403)|QuotaExceeded|Person number reached large person group level limit.||
+|Forbidden (403)|QuotaExceeded|Person number reached subscription level limit.||
+|Forbidden (403)|QuotaExceeded|Persisted face number reached limit.||
+|Not Found (404)|LargePersonGroupNotFound|Large person group is not found.||
+|Not Found (404)|LargePersonGroupNotFound|Large person group ID is invalid.||
+|Not Found (404)|PersonNotFound|Person `<personId>` is not found.||
+|Not Found (404)|PersonNotFound|Person ID is invalid.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face is not found.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face `<faceId>` is not found.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face ID is invalid.||
+|Not Found (404)|LargePersonGroupNotTrained|Large person group not trained.|This error appears when getting the training status of a group that has never been trained.|
+|Conflict (409)|LargePersonGroupExists|Large person group already exists.||
+|Conflict (409)|LargePersonGroupTrainingNotFinished|Large person group is under training.|Try again after training has completed.|
+
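`PersonGroupTrainingNotFinished` and `LargePersonGroupTrainingNotFinished` indicate that a Train call hasn't completed yet. A minimal sketch of starting training and polling its status before calling Identify might look like the following; the endpoint, key, and v1.0 paths are assumptions based on the classic Face REST API:

```python
import time
import requests

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": KEY}

def train_and_wait(large_person_group_id: str, poll_seconds: int = 2) -> None:
    """Start training and poll until it finishes, to avoid *TrainingNotFinished conflicts."""
    base = f"{ENDPOINT}/face/v1.0/largepersongroups/{large_person_group_id}"
    requests.post(f"{base}/train", headers=HEADERS).raise_for_status()
    while True:
        status = requests.get(f"{base}/training", headers=HEADERS).json()["status"]
        if status in ("succeeded", "failed"):
            break
        time.sleep(poll_seconds)
    if status == "failed":
        raise RuntimeError("Large person group training failed.")
```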
+## Face List operations
+
+These error codes can be returned by Face List operations.
+
+### Face List error codes
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|Face list ID is invalid.|Valid characters are lowercase English letters, digits, '-' and '_'. The maximum length is 64.|
+|Forbidden (403)|QuotaExceeded|Persisted face number reached limit.||
+|Not Found (404)|FaceListNotFound|Face list is not found.||
+|Not Found (404)|FaceListNotFound|Face list ID is invalid.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face is not found.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face ID is invalid.||
+|Conflict (409)|FaceListExists|Face list already exists.||
+
+### Large Face List error codes
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|Large face list ID is invalid.|Valid characters are lowercase English letters, digits, '-' and '_'. The maximum length is 64.|
+|Bad Request (400)|BadArgument|Both 'name' and 'userData' are empty.||
+|Forbidden (403)|QuotaExceeded|Large Face List number reached limit.||
+|Forbidden (403)|QuotaExceeded|Persisted face number reached limit.||
+|Not Found (404)|LargeFaceListNotFound|Large face list is not found.||
+|Not Found (404)|LargeFaceListNotFound|Large face list ID is invalid.||
+|Not Found (404)|PersistedFaceNotFound|Large Face List Face `<faceId>` is not found.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face ID is invalid.||
+|Not Found (404)|LargeFaceListNotTrained|Large face list not trained.|This error appears when getting the training status of a large face list that has never been trained.|
+|Conflict (409)|LargeFaceListExists|Large face list already exists.||
+|Conflict (409)|LargeFaceListTrainingNotFinished|Large face list is under training.|Try again after training has completed.|
+
+## Person Directory operations
+
+These error codes can be returned by Person Directory operations.
+
+### Person Directory error codes
+
+|Http status code|Error code|Error message|Description|
+|---|---|---|---|
+|Bad Request (400)|BadArgument|Recognition model is not supported for this feature.||
+|Bad Request (400)|BadArgument|'start' is not valid person ID.||
+|Bad Request (400)|BadArgument|Both 'name' and 'userData' are empty.||
+|Bad Request (400)|DynamicPersonGroupNotFound|Dynamic person group ID is invalid.||
+|Forbidden (403)|QuotaExceeded|Person number reached subscription level limit.||
+|Forbidden (403)|QuotaExceeded|Persisted face number reached limit.||
+|Not Found (404)|DynamicPersonGroupNotFound|Dynamic person group was not found.||
+|Not Found (404)|DynamicPersonGroupNotFound|DynamicPersonGroupPersonReference `<groupId>` is not found.||
+|Not Found (404)|PersonNotFound|Person is not found.||
+|Not Found (404)|PersonNotFound|Person ID is invalid.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face is not found.||
+|Not Found (404)|PersistedFaceNotFound|Persisted face `<faceId>` is not found.||
+|Not Found (404)|PersistedFaceNotFound|Persisted Face ID is invalid.||
+|Conflict (409)|DynamicPersonGroupExists|Dynamic person group ID `<groupId>` already exists.||
+
+## Next steps
+
+- [Face API reference](/rest/api/face/operation-groups)
ai-services Overview Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/overview-sdk.md
The Image Analysis SDK (preview) provides a convenient way to access the Image A
> The Image Analysis SDK was rewritten in version 1.0.0-beta.1 to better align with other Azure SDKs. All APIs have changed. See the updated [quickstart](/azure/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40), [samples](#github-samples) and [how-to-guides](/azure/ai-services/computer-vision/how-to/call-analyze-image-40) for information on how to use the new SDK. > > Major changes:
-> - The SDK now calls the generally available [Computer Vision REST API (2023-10-01)](https://eastus.dev.cognitive.microsoft.com/docs/services/Cognitive_Services_Unified_Vision_API_2023-10-01), instead of the preview [Computer Vision REST API (2023-04-01-preview)](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/61d65934cd35050c20f73ab6).
+> - The SDK now calls the generally available [Computer Vision REST API (2023-10-01)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-10-01), instead of the preview [Computer Vision REST API (2023-04-01-preview)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview).
> - Support for JavaScript was added. > - C++ is no longer supported.
-> - Image Analysis with a custom model, and Image Segmentation (background removal) are no longer supported in the SDK, because the [Computer Vision REST API (2023-10-01)](https://eastus.dev.cognitive.microsoft.com/docs/services/Cognitive_Services_Unified_Vision_API_2023-10-01) does not yet support them. To use either feature, call the [Computer Vision REST API (2023-04-01-preview)](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/61d65934cd35050c20f73ab6) directly (using the `Analyze` and `Segment` operations respectively).
+> - Image Analysis with a custom model, and Image Segmentation (background removal) are no longer supported in the SDK, because the [Computer Vision REST API (2023-10-01)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-10-01) does not yet support them. To use either feature, call the [Computer Vision REST API (2023-04-01-preview)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview) directly (using the `Analyze` and `Segment` operations respectively).
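As a rough illustration of calling the generally available API directly, the sketch below sends an image URL to the 2023-10-01 `imageanalysis:analyze` endpoint with the `requests` library. The endpoint, key, and selected features are placeholders, and the exact response shape should be confirmed against the REST reference:

```python
import requests

ENDPOINT = "https://<your-vision-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def analyze_image(image_url: str, features: str = "caption,read") -> dict:
    """Analyze an image with the GA 2023-10-01 Image Analysis REST API."""
    response = requests.post(
        f"{ENDPOINT}/computervision/imageanalysis:analyze",
        params={"api-version": "2023-10-01", "features": features},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

# Example (assumed response field names):
# result = analyze_image("https://example.com/photo.jpg")
# print(result.get("captionResult", {}).get("text"))
```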
## Supported languages
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/whats-new.md
See the [language support](/azure/ai-services/computer-vision/language-support#m
The Image Analysis SDK was rewritten in version 1.0.0-beta.1 to better align with other Azure SDKs. All APIs have changed. See the updated [quickstarts](/azure/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40), [samples](/azure/ai-services/computer-vision/sdk/overview-sdk#github-samples) and [how-to-guides](/azure/ai-services/computer-vision/how-to/call-analyze-image-40) for information on how to use the new SDK. Major changes:-- The SDK now calls the generally available [Computer Vision REST API (2023-10-01)](https://eastus.dev.cognitive.microsoft.com/docs/services/Cognitive_Services_Unified_Vision_API_2023-10-01), instead of the preview [Computer Vision REST API (2023-04-01-preview)](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/61d65934cd35050c20f73ab6).
+- The SDK now calls the generally available [Computer Vision REST API (2023-10-01)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-10-01), instead of the preview [Computer Vision REST API (2023-04-01-preview)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview).
- Support for JavaScript was added. - C++ is no longer supported.-- Image Analysis with a custom model, and Image Segmentation (background removal) are no longer supported in the SDK, because the [Computer Vision REST API (2023-10-01)](https://eastus.dev.cognitive.microsoft.com/docs/services/Cognitive_Services_Unified_Vision_API_2023-10-01) doesn't yet support them. To use either feature, call the [Computer Vision REST API (2023-04-01-preview)](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/61d65934cd35050c20f73ab6) directly (using the `Analyze` and `Segment` operations respectively).
+- Image Analysis with a custom model, and Image Segmentation (background removal) are no longer supported in the SDK, because the [Computer Vision REST API (2023-10-01)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-10-01) doesn't yet support them. To use either feature, call the [Computer Vision REST API (2023-04-01-preview)](/rest/api/computervision/operation-groups?view=rest-computervision-2023-04-01-preview) directly (using the `Analyze` and `Segment` operations respectively).
## November 2023
As part of the Image Analysis 4.0 API, the [Background removal API](./concept-ba
### Azure AI Vision 3.0 & 3.1 previews deprecation The preview versions of the Azure AI Vision 3.0 and 3.1 APIs are scheduled to be retired on September 30, 2023. Customers won't be able to make any calls to these APIs past this date. Customers are encouraged to migrate their workloads to the generally available (GA) 3.2 API instead. Mind the following changes when migrating from the preview versions to the 3.2 API:-- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they use the latest model.-- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used.-- Azure AI Vision 3.2 API uses a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
+- The [Analyze Image](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) and [Read](/rest/api/computervision/recognize-printed-text?view=rest-computervision-v3.2) API calls take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they use the latest model.
+- The [Analyze Image](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) and [Read](/rest/api/computervision/recognize-printed-text?view=rest-computervision-v3.2) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
+- Azure AI Vision 3.2 API uses a different error-reporting format. See the [API reference documentation](/rest/api/computervision/operation-groups?view=rest-computervision-v3.2) to learn how to adjust any error-handling code.
## October 2022
Vision Studio provides you with a platform to try several service features, and
### Azure AI Vision 3.2-preview deprecation The preview versions of the 3.2 API are scheduled to be retired in December of 2022. Customers are encouraged to use the generally available (GA) version of the API instead. Mind the following changes when migrating from the 3.2-preview versions:
-1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls now take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they use the latest model.
-1. The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
-1. Image Analysis APIs now use a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
+1. The [Analyze Image](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) and [Read](/rest/api/computervision/recognize-printed-text?view=rest-computervision-v3.2) API calls now take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they use the latest model.
+1. The [Analyze Image](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) and [Read](/rest/api/computervision/recognize-printed-text?view=rest-computervision-v3.2) API calls also return a `model-version` field in successful API responses. This field reports which model was used.
+1. Image Analysis APIs now use a different error-reporting format. See the [API reference documentation](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) to learn how to adjust any error-handling code.
## May 2022
See the [OCR how-to guide](how-to/call-read-api.md#determine-how-to-process-the-
### Image tagging language expansion
-The [latest version (v3.2)](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200) of the Image tagger now supports tags in 50 languages. See the [language support](language-support.md) page for more information.
+The [latest version (v3.2)](/rest/api/computervision/operation-groups?view=rest-computervision-v3.2) of the Image tagger now supports tags in 50 languages. See the [language support](language-support.md) page for more information.
## July 2021
A new version of the [Spatial Analysis container](spatial-analysis-container.md)
The Azure AI Vision API v3.2 is now generally available with the following updates:
-* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
-* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
+* Improved image tagging model: analyzes visual content and generates relevant tags based on objects, actions, and content displayed in the image. This model is available through the [Tag Image API](/rest/api/computervision/operation-groups?view=rest-computervision-v3.2). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
+* Updated content moderation model: detects presence of adult content and provides flags to filter images containing adult, racy, and gory visual content. This model is available through the [Analyze API](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2). See the Image Analysis [how-to guide](./how-to/call-analyze-image.md) and [overview](./overview-image-analysis.md) to learn more.
* [OCR (Read) available for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages. * [OCR (Read)](./overview-ocr.md) also available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment. > [!div class="nextstepaction"]
-> [See Azure AI Vision v3.2 GA](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
+> [See Azure AI Vision v3.2 GA](/rest/api/computervision/recognize-printed-text?view=rest-computervision-v3.2)
### PersonDirectory data structure (preview)
The Azure AI Vision API v3.2 is now generally available with the following updat
The Azure AI Vision API v3.2 public preview has been updated. The preview release has all Azure AI Vision features along with updated Read and Analyze APIs. > [!div class="nextstepaction"]
-> [See Azure AI Vision v3.2 public preview 3](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
+> [See Azure AI Vision v3.2 public preview 3](/rest/api/computervision/operation-groups?view=rest-computervision-v3.2-preview)
## February 2021
The Azure AI Vision Read API v3.2 public preview, available as cloud service and
See the [Read API how-to guide](how-to/call-read-api.md) to learn more. > [!div class="nextstepaction"]
-> [Use the Read API v3.2 Public Preview](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005)
+> [Use the Read API v3.2 Public Preview](/rest/api/computervision/operation-groups?view=rest-computervision-v3.2-preview)
### New Face API detection model
A new version of the [Spatial Analysis container](spatial-analysis-container.md)
## December 2020 ### Customer configuration for Face ID storage
-* While the Face Service does not store customer images, the extracted face feature(s) will be stored on server. The Face ID is an identifier of the face feature and will be used in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a), and [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). The stored face features will expire and be deleted 24 hours after the original detection call. Customers can now determine the length of time these Face IDs are cached. The maximum value is still up to 24 hours, but a minimum value of 60 seconds can now be set. The new time ranges for Face IDs being cached is any value between 60 seconds and 24 hours. More details can be found in the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API reference (the *faceIdTimeToLive* parameter).
+* While the Face Service does not store customer images, the extracted face feature(s) will be stored on the server. The Face ID is an identifier of the face feature and will be used in [Face - Identify](/rest/api/face/face-recognition-operations/identify-from-dynamic-person-group), [Face - Verify](/rest/api/face/face-recognition-operations/verify-face-to-face), and [Face - Find Similar](/rest/api/face/face-recognition-operations/find-similar). The stored face features will expire and be deleted 24 hours after the original detection call. Customers can now determine the length of time these Face IDs are cached. The maximum value is still up to 24 hours, but a minimum value of 60 seconds can now be set. The new time range for cached Face IDs is any value between 60 seconds and 24 hours. More details can be found in the [Face - Detect](/rest/api/face/face-detection-operations) API reference (the *faceIdTimeToLive* parameter).
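For illustration, a hedged sketch of a Detect call that shortens the Face ID cache using the *faceIdTimeToLive* query parameter might look like this; the endpoint and key are placeholders, and the v1.0 path is assumed from the classic Face REST API:

```python
import requests

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def detect_with_short_lived_face_ids(image_url: str, ttl_seconds: int = 120) -> list:
    """Detect faces and keep the resulting Face IDs cached only for `ttl_seconds`."""
    if not 60 <= ttl_seconds <= 86400:
        raise ValueError("faceIdTimeToLive must be between 60 and 86400 seconds.")
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceId": "true", "faceIdTimeToLive": ttl_seconds},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()  # list of detected faces, each with a faceId
```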
## November 2020 ### Sample Face enrollment app
The Azure AI Vision Read API v3.1 public preview adds these capabilities:
See the [Read API how-to guide](how-to/call-read-api.md) to learn more. > [!div class="nextstepaction"]
-> [Learn more about Read API v3.1 Public Preview 2](https://westus2.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-2/operations/5d986960601faab4bf452005)
+> [Learn more about Read API v3.1 Public Preview 2](/rest/api/computervision/operation-groups?view=rest-computervision-v3.1-preview)
## August 2020 ### Customer-managed encryption of data at rest
The Azure AI Vision Read API v3.1 public preview adds support for Simplified Chi
See the [Read API how-to guide](how-to/call-read-api.md) to learn more. > [!div class="nextstepaction"]
-> [Learn more about Read API v3.1 Public Preview 1](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-preview-1/operations/5d986960601faab4bf452005)
+> [Learn more about Read API v3.1 Public Preview 1](/rest/api/computervision/operation-groups?view=rest-computervision-v3.1-preview)
## May 2020
Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-s
## June 2019 ### New Face API detection model
-* The new Detection 02 model features improved accuracy on small, side-view, occluded, and blurry faces. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) by specifying the new face detection model name `detection_02` in `detectionModel` parameter. More details in [How to specify a detection model](how-to/specify-detection-model.md).
+* The new Detection 02 model features improved accuracy on small, side-view, occluded, and blurry faces. Use it through [Face - Detect](/rest/api/face/face-detection-operations), [FaceList - Add Face](/rest/api/face/face-list-operations/add-face-list-face), [LargeFaceList - Add Face](/rest/api/face/face-list-operations/add-large-face-list-face), [PersonGroup Person - Add Face](/rest/api/face/person-group-operations/add-person-group-person-face) and [LargePersonGroup Person - Add Face](/rest/api/face/person-group-operations/add-large-person-group-person-face) by specifying the new face detection model name `detection_02` in `detectionModel` parameter. More details in [How to specify a detection model](how-to/specify-detection-model.md).
## April 2019 ### Improved attribute accuracy
-* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value enabled now. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+* Improved overall accuracy of the `age` and `headPose` attributes. The `headPose` attribute is also updated with the `pitch` value enabled now. Use these attributes by specifying them in the `returnFaceAttributes` parameter of [Face - Detect](/rest/api/face/face-detection-operations) `returnFaceAttributes` parameter.
### Improved processing speeds
-* Improved speeds of [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42) operations.
+* Improved speeds of [Face - Detect](/rest/api/face/face-detection-operations), [FaceList - Add Face](/rest/api/face/face-list-operations/add-face-list-face), [LargeFaceList - Add Face](/rest/api/face/face-list-operations/add-large-face-list-face), [PersonGroup Person - Add Face](/rest/api/face/person-group-operations/add-person-group-person-face) and [LargePersonGroup Person - Add Face](/rest/api/face/person-group-operations/add-large-person-group-person-face) operations.
## March 2019 ### New Face API recognition model
-* The Recognition 02 model has improved accuracy. Use it through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039524b), [LargeFaceList - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc), [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) and [LargePersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d) by specifying the new face recognition model name `recognition_02` in `recognitionModel` parameter. More details in [How to specify a recognition model](how-to/specify-recognition-model.md).
+* The Recognition 02 model has improved accuracy. Use it through [Face - Detect](/rest/api/face/face-detection-operations), [FaceList - Create](/rest/api/face/face-list-operations/create-face-list), [LargeFaceList - Create](/rest/api/face/face-list-operations/create-large-face-list), [PersonGroup - Create](/rest/api/face/person-group-operations/create-person-group) and [LargePersonGroup - Create](/rest/api/face/person-group-operations/create-large-person-group) by specifying the new face recognition model name `recognition_02` in `recognitionModel` parameter. More details in [How to specify a recognition model](how-to/specify-recognition-model.md).
## January 2019 ### Face Snapshot feature
-* This feature allows the service to support data migration across subscriptions: [Snapshot](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-get).
+* This feature allows the service to support data migration across subscriptions: [Snapshot](/rest/api/face/snapshot?view=rest-face-v1.0-preview).
> [!IMPORTANT] > As of June 30, 2023, the Face Snapshot API is retired.
Follow an [Extract text quickstart](https://github.com/Azure-Samples/cognitive-s
## October 2018 ### API messages
-* Refined description for `status`, `createdDateTime`, `lastActionDateTime`, and `lastSuccessfulTrainingDateTime` in [PersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395247), [LargePersonGroup - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae32c6ac60f11b48b5aa5), and [LargeFaceList - Get Training Status](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a1582f8d2de3616c086f2cf).
+* Refined description for `status`, `createdDateTime`, `lastActionDateTime`, and `lastSuccessfulTrainingDateTime` in [PersonGroup - Get Training Status](/rest/api/face/person-group-operations/get-person-group-training-status), [LargePersonGroup - Get Training Status](/rest/api/face/person-group-operations/get-large-person-group-training-status), and [LargeFaceList - Get Training Status](/rest/api/face/face-list-operations/get-large-face-list-training-status).
## May 2018 ### Improved attribute accuracy
-* Improved `gender` attribute significantly and also improved `age`, `glasses`, `facialHair`, `hair`, `makeup` attributes. Use them through [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+* Improved `gender` attribute significantly and also improved `age`, `glasses`, `facialHair`, `hair`, `makeup` attributes. Use them through [Face - Detect](/rest/api/face/face-detection-operations) `returnFaceAttributes` parameter.
### Increased file size limit
-* Increased input image file size limit from 4 MB to 6 MB in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250), [LargeFaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a158c10d2de3616c086f2d3), [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) and [LargePersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599adf2a3a7b9412a4d53f42).
+* Increased input image file size limit from 4 MB to 6 MB in [Face - Detect](/rest/api/face/face-detection-operations), [FaceList - Add Face](/rest/api/face/face-list-operations/add-face-list-face), [LargeFaceList - Add Face](/rest/api/face/face-list-operations/add-large-face-list-face), [PersonGroup Person - Add Face](/rest/api/face/person-group-operations/add-person-group-person-face) and [LargePersonGroup Person - Add Face](/rest/api/face/person-group-operations/add-large-person-group-person-face).
## March 2018 ### New data structure
-* [LargeFaceList](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc) and [LargePersonGroup](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599acdee6ac60f11b48b5a9d). More details in [How to scale to handle more enrolled users](how-to/use-large-scale.md).
-* Increased [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) `maxNumOfCandidatesReturned` parameter from [1, 5] to [1, 100] and default to 10.
+* [LargeFaceList](/rest/api/face/face-list-operations/create-large-face-list) and [LargePersonGroup](/rest/api/face/person-group-operations/create-large-person-group). More details in [How to scale to handle more enrolled users](how-to/use-large-scale.md).
+* Increased [Face - Identify](/rest/api/face/face-recognition-operations/identify-from-dynamic-person-group) `maxNumOfCandidatesReturned` parameter from [1, 5] to [1, 100] and default to 10.
## May 2017 ### New detectable Face attributes
-* Added `hair`, `makeup`, `accessory`, `occlusion`, `blur`, `exposure`, and `noise` attributes in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
-* Supported 10K persons in a PersonGroup and [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-* Supported pagination in [PersonGroup Person - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395241) with optional parameters: `start` and `top`.
+* Added `hair`, `makeup`, `accessory`, `occlusion`, `blur`, `exposure`, and `noise` attributes in [Face - Detect](/rest/api/face/face-detection-operations) `returnFaceAttributes` parameter.
+* Supported 10K persons in a PersonGroup and [Face - Identify](/rest/api/face/face-recognition-operations/identify-from-dynamic-person-group).
+* Supported pagination in [PersonGroup Person - List](/rest/api/face/person-group-operations/get-person-group-persons) with optional parameters: `start` and `top`.
* Supported concurrency in adding/deleting faces against different FaceLists and different persons in PersonGroup. ## March 2017 ### New detectable Face attribute
-* Added `emotion` attribute in [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) `returnFaceAttributes` parameter.
+* Added `emotion` attribute in [Face - Detect](/rest/api/face/face-detection-operations) `returnFaceAttributes` parameter.
### Fixed issues
-* Face could not be re-detected with rectangle returned from [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) as `targetFace` in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
+* Face could not be re-detected with rectangle returned from [Face - Detect](/rest/api/face/face-detection-operations) as `targetFace` in [FaceList - Add Face](/rest/api/face/face-list-operations/add-face-list-face) and [PersonGroup Person - Add Face](/rest/api/face/person-group-operations/add-person-group-person-face).
* The detectable face size is set to ensure it is strictly between 36x36 and 4096x4096 pixels. ## November 2016 ### New subscription tier
-* Added Face Storage Standard subscription to store additional persisted faces when using [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) or [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) for identification or similarity matching. The stored images are charged at $0.5 per 1000 faces and this rate is prorated on a daily basis. Free tier subscriptions continue to be limited to 1,000 total persons.
+* Added Face Storage Standard subscription to store additional persisted faces when using [PersonGroup Person - Add Face](/rest/api/face/person-group-operations/add-person-group-person-face) or [FaceList - Add Face](/rest/api/face/face-list-operations/add-face-list-face) for identification or similarity matching. The stored images are charged at $0.5 per 1000 faces and this rate is prorated on a daily basis. Free tier subscriptions continue to be limited to 1,000 total persons.
## October 2016 ### API messages
-* Changed the error message of more than one face in the `targetFace` from 'There are more than one face in the image' to 'There is more than one face in the image' in [FaceList - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395250) and [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b).
+* Changed the error message of more than one face in the `targetFace` from 'There are more than one face in the image' to 'There is more than one face in the image' in [FaceList - Add Face](/rest/api/face/face-list-operations/add-face-list-face) and [PersonGroup Person - Add Face](/rest/api/face/person-group-operations/add-person-group-person-face).
## July 2016 ### New features
-* Supported Face to Person object authentication in [Face - Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a).
-* Added optional `mode` parameter enabling selection of two working modes: `matchPerson` and `matchFace` in [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and default is `matchPerson`.
-* Added optional `confidenceThreshold` parameter for user to set the threshold of whether one face belongs to a Person object in [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
-* Added optional `start` and `top` parameters in [PersonGroup - List](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395248) to enable user to specify the start point and the total PersonGroups number to list.
+* Supported Face to Person object authentication in [Face - Verify](/rest/api/face/face-recognition-operations/verify-face-to-face).
+* Added optional `mode` parameter enabling selection of two working modes: `matchPerson` and `matchFace` in [Face - Find Similar](/rest/api/face/face-recognition-operations/find-similar) and default is `matchPerson`.
+* Added optional `confidenceThreshold` parameter for user to set the threshold of whether one face belongs to a Person object in [Face - Identify](/rest/api/face/face-recognition-operations/identify-from-dynamic-person-group).
+* Added optional `start` and `top` parameters in [PersonGroup - List](/rest/api/face/person-group-operations/get-person-groups) to enable user to specify the start point and the total PersonGroups number to list.
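As a rough illustration, the sketch below calls Find Similar against a large face list and selects the matching mode; the endpoint, key, and v1.0 `findsimilars` request shape are assumptions based on the classic Face REST API:

```python
import requests

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def find_similar(face_id: str, large_face_list_id: str, mode: str = "matchPerson") -> list:
    """Find similar faces in a trained large face list using the selected matching mode."""
    if mode not in ("matchPerson", "matchFace"):
        raise ValueError("mode must be 'matchPerson' or 'matchFace'.")
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/findsimilars",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={
            "faceId": face_id,                      # from a prior Face - Detect call
            "largeFaceListId": large_face_list_id,  # exactly one list/scope parameter
            "mode": mode,
            "maxNumOfCandidatesReturned": 10,
        },
    )
    response.raise_for_status()
    return response.json()
```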
## V1.0 changes from V0 * Updated service root endpoint from ```https://westus.api.cognitive.microsoft.com/face/v0/``` to ```https://westus.api.cognitive.microsoft.com/face/v1.0/```. Changes applied to:
- [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236), [Face - Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), [Face - Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) and [Face - Group](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395238).
+ [Face - Detect](/rest/api/face/face-detection-operations), [Face - Identify](/rest/api/face/face-recognition-operations/identify-from-dynamic-person-group), [Face - Find Similar](/rest/api/face/face-recognition-operations/find-similar) and [Face - Group](/rest/api/face/face-recognition-operations/group).
* Updated the minimal detectable face size to 36x36 pixels. Faces smaller than 36x36 pixels will not be detected. * Deprecated the PersonGroup and Person data in Face V0. Those data cannot be accessed with the Face V1.0 service. * Deprecated the V0 endpoint of Face API on June 30, 2016.
ai-services Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/api-reference.md
You can use the following **Content Moderator APIs** to set up your post-moderat
| Description | Reference | | -- |-|
-| **Image Moderation API**<br /><br />Scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information. | [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c "Image Moderation API reference") |
-| **Text Moderation API**<br /><br />Scan text content. Profanity terms and personal data are returned. | [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f "Text Moderation API reference") |
+| **Image Moderation API**<br /><br />Scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information. | [Image Moderation API reference](/rest/api/cognitiveservices/contentmoderator/image-moderation) |
+| **Text Moderation API**<br /><br />Scan text content. Profanity terms and personal data are returned. | [Text Moderation API reference](/rest/api/cognitiveservices/contentmoderator/text-moderation) |
| **Video Moderation API**<br /><br />Scan videos and detect potential adult and racy content. | [Video Moderation API overview](video-moderation-api.md "Video Moderation API overview") |
-| **List Management API**<br /><br />Create and manage custom exclusion or inclusion lists of images and text. If enabled, the **Image - Match** and **Text - Screen** operations do fuzzy matching of the submitted content against your custom lists. <br /><br />For efficiency, you can skip the machine learning-based moderation step.<br /><br /> | [List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f675 "List Management API reference") |
+| **List Management API**<br /><br />Create and manage custom exclusion or inclusion lists of images and text. If enabled, the **Image - Match** and **Text - Screen** operations do fuzzy matching of the submitted content against your custom lists. <br /><br />For efficiency, you can skip the machine learning-based moderation step.<br /><br /> | [List Management API reference](/rest/api/cognitiveservices/contentmoderator/list-management-image-lists) |
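For illustration, a minimal, hedged sketch of screening a text string with the Text Moderation API might look like the following; the endpoint and key are placeholders, and the `ProcessText/Screen` path and query parameters should be confirmed against the Text Moderation API reference:

```python
import requests

ENDPOINT = "https://<your-content-moderator>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def screen_text(text: str, language: str = "eng") -> dict:
    """Screen a short text string for profanity, personal data, and classification."""
    response = requests.post(
        f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessText/Screen",
        params={"language": language, "classify": "True", "PII": "True"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "text/plain",
        },
        data=text.encode("utf-8"),
    )
    response.raise_for_status()
    return response.json()  # includes Terms, PII, and Classification fields when detected
```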
ai-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/export-delete-data.md
For more information on how to export and delete user data in Content Moderator,
| Data | Export Operation | Delete Operation | | - | - | - | | Account Info (Subscription Keys) | N/A | Delete using the Azure portal (Azure Subscriptions). |
-| Images for custom matching | Call the [Get image IDs API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f676). Images are stored in a one-way proprietary hash format, and there is no way to extract the actual images. | Call the [Delete all Images API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f686). Or delete the Content Moderator resource using the Azure portal. |
-| Terms for custom matching | Cal the [Get all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67e) | Call the [Delete all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67d). Or delete the Content Moderator resource using the Azure portal. |
+| Images for custom matching | Call the [Get image IDs API](/rest/api/cognitiveservices/contentmoderator/list-management-image/get-all-image-ids). Images are stored in a one-way proprietary hash format, and there is no way to extract the actual images. | Call the [Delete all Images API](/rest/api/cognitiveservices/contentmoderator/list-management-image/delete-all-images). Or delete the Content Moderator resource using the Azure portal. |
+| Terms for custom matching | Call the [Get all terms API](/rest/api/cognitiveservices/contentmoderator/list-management-term/get-all-terms). | Call the [Delete all terms API](/rest/api/cognitiveservices/contentmoderator/list-management-term/delete-all-terms). Or delete the Content Moderator resource using the Azure portal. |
ai-services Image Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/image-moderation-api.md
Instead of moderating the same image multiple times, you add the offensive image
> There is a maximum limit of **5 image lists** with each list to **not exceed 10,000 images**. >
-The Content Moderator provides a complete [Image List Management API](try-image-list-api.md) with operations for managing lists of custom images. Start with the [Image Lists API Console](try-image-list-api.md) and use the REST API code samples. Also check out the [Image List .NET quickstart](image-lists-quickstart-dotnet.md) if you are familiar with Visual Studio and C#.
+The Content Moderator provides a complete Image List Management API with operations for managing lists of custom images. Check out the [Image List .NET quickstart](image-lists-quickstart-dotnet.md) if you are familiar with Visual Studio and C#.
## Matching against your custom lists
Example extract:
## Next steps
-Test drive the [Image Moderation API console](try-image-api.md) and use the REST API code samples.
+Test drive the [Quickstart](client-libraries.md) and use the REST API code samples.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/overview.md
You may want to build content filtering software into your app to comply with re
This documentation contains the following article types: * [**Quickstarts**](client-libraries.md) are getting-started instructions to guide you through making requests to the service.
-* [**How-to guides**](try-text-api.md) contain instructions for using the service in more specific or customized ways.
+* [**How-to guides**](video-moderation-api.md) contain instructions for using the service in more specific or customized ways.
* [**Concepts**](text-moderation-api.md) provide in-depth explanations of the service functionality and features. For a more structured approach, follow a Training module for Content Moderator.
The following table describes the different types of moderation APIs.
| API group | Description | | | -- | |[**Text moderation**](text-moderation-api.md)| Scans text for offensive content, sexually explicit or suggestive content, profanity, and personal data.|
-|[**Custom term lists**](try-terms-list-api.md)| Scans text against a custom list of terms along with the built-in terms. Use custom lists to block or allow content according to your own content policies.|
+|[**Custom term lists**](term-lists-quickstart-dotnet.md)| Scans text against a custom list of terms along with the built-in terms. Use custom lists to block or allow content according to your own content policies.|
|[**Image moderation**](image-moderation-api.md)| Scans images for adult or racy content, detects text in images with the Optical Character Recognition (OCR) capability, and detects faces.|
-|[**Custom image lists**](try-image-list-api.md)| Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again.|
+|[**Custom image lists**](image-lists-quickstart-dotnet.md)| Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again.|
|[**Video moderation**](video-moderation-api.md)| Scans videos for adult or racy content and returns time markers for said content.| ## Data privacy and security
ai-services Text Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/text-moderation-api.md
The service response includes the following information:
## Profanity
-If the API detects any profane terms in any of the [supported languages](./language-support.md), those terms are included in the response. The response also contains their location (`Index`) in the original text. The `ListId` in the following sample JSON refers to terms found in [custom term lists](try-terms-list-api.md) if available.
+If the API detects any profane terms in any of the [supported languages](./language-support.md), those terms are included in the response. The response also contains their location (`Index`) in the original text. The `ListId` in the following sample JSON refers to terms found in custom term lists if available.
```json "Terms": [
The following example shows the matching List ID:
} ```
-The Content Moderator provides a [Term List API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f) with operations for managing custom term lists. Start with the [Term Lists API Console](try-terms-list-api.md) and use the REST API code samples. Also check out the [Term Lists .NET quickstart](term-lists-quickstart-dotnet.md) if you are familiar with Visual Studio and C#.
+The Content Moderator provides a [Term List API](/rest/api/cognitiveservices/contentmoderator/list-management-term-lists) with operations for managing custom term lists. Check out the [Term Lists .NET quickstart](term-lists-quickstart-dotnet.md) if you are familiar with Visual Studio and C#.
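+If you process the Screen response in code, a small helper can group detected terms by the custom list they matched. The following is a minimal sketch that assumes the response has already been parsed into a dictionary with the `Terms`, `Index`, and `ListId` fields described above; the field casing mirrors the sample JSON and the helper name is illustrative, so adjust it to the shape your API version returns.
+
+```python
+from collections import defaultdict
+
+def terms_by_list(screen_response: dict) -> dict:
+    """Group detected terms by the ListId of the custom term list they matched."""
+    grouped = defaultdict(list)
+    for term in screen_response.get("Terms") or []:
+        grouped[term.get("ListId")].append((term.get("Term"), term.get("Index")))
+    return dict(grouped)
+
+# Example with the response shape described above (values are placeholders).
+sample = {"Terms": [{"Index": 12, "ListId": 231, "Term": "<offensive word>"}]}
+print(terms_by_list(sample))  # {231: [('<offensive word>', 12)]}
+```
+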
## Next steps
-Test out the APIs with the [Text moderation API console](try-text-api.md).
+Test out the APIs with the [Quickstart](client-libraries.md).
ai-services Try Image Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-image-api.md
- Title: Moderate images with the API Console - Content Moderator-
-description: Use the Image Moderation API in Azure Content Moderator to scan image content.
-#
---- Previously updated : 01/18/2024----
-# Moderate images from the API console
-
-Use the [Image Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c) in Azure Content Moderator to scan image content. The moderation job scans your content for profanity, and compares it against custom and shared blocklists.
-
-## Use the API console
-Before you can test-drive the API in the online console, you need your subscription key. This is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
-
-1. Go to [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c).
-
- The **Image - Evaluate** image moderation page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Try Image - Evaluate page region selection](images/test-drive-region.png)
-
- The **Image - Evaluate** API console opens.
-
-3. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
- ![Try Image - Evaluate console subscription key](images/try-image-api-1.png)
-
-4. In the **Request body** box, use the default sample image, or specify an image to scan. You can submit the image itself as binary bit data, or specify a publicly accessible URL for an image.
-
- For this example, use the path provided in the **Request body** box, and then select **Send**.
-
- ![Try Image - Evaluate console Request body](images/try-image-api-2.png)
-
- This is the image at that URL:
-
- ![Try Image - Evaluate console sample image](images/sample-image.jpg)
-
-5. Select **Send**.
-
-6. The API returns a probability score for each classification. It also returns a determination of whether the image meets the conditions (**true** or **false**).
-
- ![Try Image - Evaluate console probability score and condition determination](images/try-image-api-3.png)
-
-## Face detection
-
-You can use the Image Moderation API to locate faces in an image. This option can be useful when you have privacy concerns and want to prevent a specific face from being posted on your platform.
-
-1. In the [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c), in the left menu, under **Image**, select **Find Faces**.
-
- The **Image - Find Faces** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Try Image - Find Faces page region selection](images/test-drive-region.png)
-
- The **Image - Find Faces** API console opens.
-
-3. Specify an image to scan. You can submit the image itself as binary bit data, or specify a publicly accessible URL to an image. This example links to an image that's used in a CNN story.
-
- ![Try Image - Find Faces sample image](images/try-image-api-face-image.jpg)
-
- ![Try Image - Find Faces sample request](images/try-image-api-face-request.png)
-
-4. Select **Send**. In this example, the API finds two faces, and returns their coordinates in the image.
-
- ![Try Image - Find Faces sample Response content box](images/try-image-api-face-response.png)
-
-## Text detection via OCR capability
-
-You can use the Content Moderator OCR capability to detect text in images.
-
-1. In the [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c), in the left menu, under **Image**, select **OCR**.
-
- The **Image - OCR** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image - OCR page region selection](images/test-drive-region.png)
-
- The **Image - OCR** API console opens.
-
-3. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
-4. In the **Request body** box, use the default sample image. This is the same image that's used in the preceding section.
-
-5. Select **Send**. The extracted text is displayed in JSON:
-
- ![Image - OCR sample Response content box](images/try-image-api-ocr.png)
-
-## Next steps
-
-Use the REST API in your code, or follow the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) to add image moderation to your application.
ai-services Try Image List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-image-list-api.md
- Title: Moderate images with custom lists and the API console - Content Moderator-
-description: You use the List Management API in Azure Content Moderator to create custom lists of images.
-#
---- Previously updated : 01/18/2024----
-# Moderate with custom image lists in the API console
-
-You use the [List Management API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672) in Azure Content Moderator to create custom lists of images. Use the custom lists of images with the Image Moderation API. The image moderation operation evaluates your image. If you create custom lists, the operation also compares it to the images in your custom lists. You can use custom lists to block or allow the image.
-
-> [!NOTE]
-> There is a maximum limit of **5 image lists** with each list to **not exceed 10,000 images**.
->
-
-You use the List Management API to do the following tasks:
--- Create a list.-- Add images to a list.-- Screen images against the images in a list.-- Delete images from a list.-- Delete a list.-- Edit list information.-- Refresh the index so that changes to the list are included in a new scan.-
-## Use the API console
-Before you can test-drive the API in the online console, you need your subscription key. This is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
-
-## Refresh search index
-
-After you make changes to an image list, you must refresh its index for changes to be included in future scans. This step is similar to how a search engine on your desktop (if enabled) or a web search engine continually refreshes its index to include new files or pages.
-
-1. In the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672), in the left menu, select **Image Lists**, and then select **Refresh Search Index**.
-
- The **Image Lists - Refresh Search Index** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image Lists - Refresh Search Index page region selection](images/test-drive-region.png)
-
- The **Image Lists - Refresh Search Index** API console opens.
-
-3. In the **listId** box, enter the list ID. Enter your subscription key, and then select **Send**.
-
- ![Image Lists - Refresh Search Index console Response content box](images/try-image-list-refresh-1.png)
--
-## Create an image list
-
-1. Go to the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672).
-
- The **Image Lists - Create** page opens.
-
-3. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image Lists - Create page region selection](images/test-drive-region.png)
-
- The **Image Lists - Create** API console opens.
-
-4. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
-5. In the **Request body** box, enter values for **Name** (for example, MyList) and **Description**.
-
- ![Image Lists - Create console Request body name and description](images/try-terms-list-create-1.png)
-
-6. Use key-value pair placeholders to assign more descriptive metadata to your list.
-
- ```json
- {
- "Name": "MyExclusionList",
- "Description": "MyListDescription",
- "Metadata":
- {
- "Category": "Competitors",
- "Type": "Exclude"
- }
- }
- ```
-
- Add list metadata as key-value pairs, and not the actual images.
-
-7. Select **Send**. Your list is created. Note the **ID** value that is associated with the new list. You need this ID for other image list management functions.
-
- ![Image Lists - Create console Response content box shows the list ID](images/try-terms-list-create-2.png)
-
-8. Next, add images to MyList. In the left menu, select **Image**, and then select **Add Image**.
-
- The **Image - Add Image** page opens.
-
-9. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image - Add Image page region selection](images/test-drive-region.png)
-
- The **Image - Add Image** API console opens.
-
-10. In the **listId** box, enter the list ID that you generated, and then enter the URL of the image that you want to add. Enter your subscription key, and then select **Send**.
-
-11. To verify that the image has been added to the list, in the left menu, select **Image**, and then select **Get All Image Ids**.
-
- The **Image - Get All Image Ids** API console opens.
-
-12. In the **listId** box, enter the list ID, and then enter your subscription key. Select **Send**.
-
- ![Image - Get All Image Ids console Response content box lists the images that you entered](images/try-image-list-create-11.png)
-
-10. Add a few more images. Now that you have created a custom list of images, try [evaluating images](try-image-api.md) by using the custom image list.
-
-## Delete images and lists
-
-Deleting an image or a list is straightforward. You can use the API to do the following tasks:
--- Delete an image. (**Image - Delete**)-- Delete all the images in a list without deleting the list. (**Image - Delete All Images**)-- Delete a list and all of its contents. (**Image Lists - Delete**)-
-This example deletes a single image:
-
-1. In the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672), in the left menu, select **Image**, and then select **Delete**.
-
- The **Image - Delete** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image - Delete page region selection](images/test-drive-region.png)
-
- The **Image - Delete** API console opens.
-
-3. In the **listId** box, enter the ID of the list to delete an image from. This is the number returned in the **Image - Get All Image Ids** console for MyList. Then, enter the **ImageId** of the image to delete.
-
-In our example, the list ID is **58953**, the value for **ContentSource**. The image ID is **59021**, the value for **ContentIds**.
-
-1. Enter your subscription key, and then select **Send**.
-
-1. To verify that the image has been deleted, use the **Image - Get All Image Ids** console.
-
-## Change list information
-
-You can edit a list's name and description, and add metadata items.
-
-1. In the [Image List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f672), in the left menu, select **Image Lists**, and then select **Update Details**.
-
- The **Image Lists - Update Details** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Image Lists - Update Details page region selection](images/test-drive-region.png)
-
- The **Image Lists - Update Details** API console opens.
-
-3. In the **listId** box, enter the list ID, and then enter your subscription key.
-
-4. In the **Request body** box, make your edits, and then select the **Send** button on the page.
-
- ![Image Lists - Update Details console Request body edits](images/try-terms-list-change-1.png)
-
-
-## Next steps
-
-Use the REST API in your code or start with the [Image lists .NET quickstart](image-lists-quickstart-dotnet.md) to integrate with your application.
ai-services Try Terms List Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-terms-list-api.md
- Title: Moderate text with custom term lists - Content Moderator-
-description: Use the List Management API to create custom lists of terms to use with the Text Moderation API.
-#
---- Previously updated : 01/18/2024----
-# Moderate with custom term lists in the API console
-
-The default global list of terms in Azure Content Moderator is sufficient for most content moderation needs. However, you might need to screen for terms that are specific to your organization. For example, you might want to tag competitor names for further review.
-
-Use the [List Management API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f) to create custom lists of terms to use with the Text Moderation API. The **Text - Screen** operation scans your text for profanity, and also compares text against custom and shared blocklists.
-
-> [!NOTE]
-> There is a maximum limit of **5 term lists** with each list to **not exceed 10,000 terms**.
->
-
-You can use the List Management API to do the following tasks:
-- Create a list.-- Add terms to a list.-- Screen terms against the terms in a list.-- Delete terms from a list.-- Delete a list.-- Edit list information.-- Refresh the index so that changes to the list are included in a new scan.-
-## Use the API console
-
-Before you can test-drive the API in the online console, you need your subscription key. This key is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
-
-## Refresh search index
-
-After you make changes to a term list, you must refresh its index for changes to be included in future scans. This step is similar to how a search engine on your desktop (if enabled) or a web search engine continually refreshes its index to include new files or pages.
-
-1. In the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f), in the left menu, select **Term Lists**, and then select **Refresh Search Index**.
-
- The **Term Lists - Refresh Search Index** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term Lists - Refresh Search Index page region selection](images/test-drive-region.png)
-
- The **Term Lists - Refresh Search Index** API console opens.
-
-3. In the **listId** box, enter the list ID. Enter your subscription key, and then select **Send**.
-
- ![Term Lists API - Refresh Search Index console Response content box](images/try-terms-list-refresh-1.png)
-
-## Create a term list
-1. Go to the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f).
-
- The **Term Lists - Create** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term Lists - Create page region selection](images/test-drive-region.png)
-
- The **Term Lists - Create** API console opens.
-
-3. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
-4. In the **Request body** box, enter values for **Name** (for example, MyList) and **Description**.
-
- ![Term Lists - Create console Request body name and description](images/try-terms-list-create-1.png)
-
-5. Use key-value pair placeholders to assign more descriptive metadata to your list.
-
- ```json
- {
- "Name": "MyExclusionList",
- "Description": "MyListDescription",
- "Metadata":
- {
- "Category": "Competitors",
- "Type": "Exclude"
- }
- }
- ```
-
- Add list metadata as key-value pairs, and not actual terms.
-
-6. Select **Send**. Your list is created. Note the **ID** value that is associated with the new list. You need this ID for other term list management functions.
-
- ![Term Lists - Create console Response content box shows the list ID](images/try-terms-list-create-2.png)
-
-7. Add terms to MyList. In the left menu, under **Term**, select **Add Term**.
-
- The **Term - Add Term** page opens.
-
-8. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term - Add Term page region selection](images/test-drive-region.png)
-
- The **Term - Add Term** API console opens.
-
-9. In the **listId** box, enter the list ID that you generated, and select a value for **language**. Enter your subscription key, and then select **Send**.
-
- ![Term - Add Term console query parameters](images/try-terms-list-create-3.png)
-
-10. To verify that the term has been added to the list, in the left menu, select **Term**, and then select **Get All Terms**.
-
- The **Term - Get All Terms** API console opens.
-
-11. In the **listId** box, enter the list ID, and then enter your subscription key. Select **Send**.
-
-12. In the **Response content** box, verify the terms you entered.
-
- ![Term - Get All Terms console Response content box lists the terms that you entered](images/try-terms-list-create-4.png)
-
-13. Add a few more terms. Now that you have created a custom list of terms, try [scanning some text](try-text-api.md) by using the custom term list.
-
-## Delete terms and lists
-
-Deleting a term or a list is straightforward. You use the API to do the following tasks:
--- Delete a term. (**Term - Delete**)-- Delete all the terms in a list without deleting the list. (**Term - Delete All Terms**)-- Delete a list and all of its contents. (**Term Lists - Delete**)-
-This example deletes a single term.
-
-1. In the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f), in the left menu, select **Term**, and then select **Delete**.
-
- The **Term - Delete** opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term - Delete page region selection](images/test-drive-region.png)
-
- The **Term - Delete** API console opens.
-
-3. In the **listId** box, enter the ID of the list that you want to delete a term from. This ID is the number (in our example, **122**) that is returned in the **Term Lists - Get Details** console for MyList. Enter the term and select a language.
-
- ![Term - Delete console query parameters](images/try-terms-list-delete-1.png)
-
-4. Enter your subscription key, and then select **Send**.
-
-5. To verify that the term has been deleted, use the **Term Lists - Get All** console.
-
- ![Term Lists - Get All console Response content box shows that term is deleted](images/try-terms-list-delete-2.png)
-
-## Change list information
-
-You can edit a list's name and description, and add metadata items.
-
-1. In the [Term List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67f), in the left menu, select **Term Lists**, and then select **Update Details**.
-
- The **Term Lists - Update Details** page opens.
-
-2. For **Open API testing console**, select the region that most closely describes your location.
-
- ![Term Lists - Update Details page region selection](images/test-drive-region.png)
-
- The **Term Lists - Update Details** API console opens.
-
-3. In the **listId** box, enter the list ID, and then enter your subscription key.
-
-4. In the **Request body** box, make your edits, and then select **Send**.
-
- ![Term Lists - Update Details console Request body edits](images/try-terms-list-change-1.png)
-
-
-## Next steps
-
-Use the REST API in your code or start with the [Term lists .NET quickstart](term-lists-quickstart-dotnet.md) to integrate with your application.
ai-services Try Text Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/try-text-api.md
- Title: Moderate text by using the Text Moderation API - Content Moderator-
-description: Test-drive text moderation by using the Text Moderation API in the online console.
-#
----- Previously updated : 01/18/2024--
-# Moderate text from the API console
-
-Use the [Text Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f) in Azure Content Moderator to scan your text content for profanity and compare it against custom and shared lists.
-
-## Get your API key
-
-Before you can test-drive the API in the online console, you need your subscription key. This is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
-
-## Navigate to the API reference
-
-Go to the [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f).
-
- The **Text - Screen** page opens.
-
-## Open the API console
-
-For **Open API testing console**, select the region that most closely describes your location.
-
- ![Text - Screen page region selection](images/test-drive-region.png)
-
- The **Text - Screen** API console opens.
-
-## Select the inputs
-
-### Parameters
-
-Select the query parameters that you want to use in your text screen. For this example, use the default value for **language**. You can also leave it blank because the operation will automatically detect the likely language as part of its execution.
-
-> [!NOTE]
-> For the **language** parameter, assign `eng` or leave it empty to see the machine-assisted **classification** response (preview feature). **This feature supports English only**.
->
-> For **profanity terms** detection, use the [ISO 639-3 code](http://www-01.sil.org/iso639-3/codes.asp) of the supported languages listed in this article, or leave it empty.
-
-For **autocorrect**, **PII**, and **classify (preview)**, select **true**. Leave the **ListId** field empty.
-
- ![Text - Screen console query parameters](images/text-api-console-inputs.png)
-
-### Content type
-
-For **Content-Type**, select the type of content you want to screen. For this example, use the default **text/plain** content type. In the **Ocp-Apim-Subscription-Key** box, enter your subscription key.
-
-### Sample text to scan
-
-In the **Request body** box, enter some text. The following example shows an intentional typo in the text.
-
-```
-Is this a grabage or <offensive word> email abcdef@abcd.com, phone: 4255550111, IP:
-255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555.
-```
-
-## Analyze the response
-
-The following response shows the various insights from the API. It contains potential profanity, personal data, classification (preview), and the auto-corrected version.
-
-> [!NOTE]
-> The machine-assisted 'Classification' feature is in preview and supports English only.
-
-```json
-{
- "original_text":"Is this a grabage or <offensive word> email abcdef@abcd.com, phone:
- 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052.",
- "normalized_text":" grabage <offensive word> email abcdef@abcd.com, phone:
- 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052.",
- "auto_corrected_text":"Is this a garbage or <offensive word> email abcdef@abcd.com, phone:
- 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052.",
- "status":{
- "code":3000,
- "description":"OK"
- },
- "pii":{
- "email":[
- {
- "detected":"abcdef@abcd.com",
- "sub_type":"Regular",
- "text":"abcdef@abcd.com",
- "index":32
- }
- ],
- "ssn":[
-
- ],
- "ipa":[
- {
- "sub_type":"IPV4",
- "text":"255.255.255.255",
- "index":72
- }
- ],
- "phone":[
- {
- "country_code":"US",
- "text":"6657789887",
- "index":56
- }
- ],
- "address":[
- {
- "text":"1 Microsoft Way, Redmond, WA 98052",
- "index":89
- }
- ]
- },
- "language":"eng",
- "terms":[
- {
- "index":12,
- "original_index":21,
- "list_id":0,
- "term":"<offensive word>"
- }
- ],
- "tracking_id":"WU_ibiza_65a1016d-0f67-45d2-b838-b8f373d6d52e_ContentModerator.
- F0_fe000d38-8ecd-47b5-a8b0-4764df00e3b5"
-}
-```
-
-For a detailed explanation of all sections in the JSON response, refer to the [Text moderation](text-moderation-api.md) conceptual guide.
-
-## Next steps
-
-Use the REST API in your code, or follow the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) to integrate with your application.
ai-services Custom Categories Rapid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/custom-categories-rapid.md
+
+ Title: "Custom categories (rapid) in Azure AI Content Safety"
+
+description: Learn about content incidents and how you can use Azure AI Content Safety to handle them on your platform.
+#
+++++ Last updated : 04/11/2024+++
+# Custom categories (rapid)
+
+In content moderation scenarios, custom categories (rapid) is the process of identifying, analyzing, containing, eradicating, and recovering from cyber incidents that involve inappropriate or harmful content on online platforms.
+
+An incident may involve a set of emerging content patterns (text, image, or other modalities) that violate Microsoft community guidelines or the customers' own policies and expectations. These incidents need to be mitigated quickly and accurately to avoid potential live site issues or harm to users and communities.
+
+## Custom categories (rapid) API features
+
+One way to deal with emerging content incidents is to use [Blocklists](/azure/ai-services/content-safety/how-to/use-blocklist), but that only allows exact text matching and no image matching. The Azure AI Content Safety custom categories (rapid) API offers the following advanced capabilities:
+- Semantic text matching using embedding search with a lightweight classifier.
+- Image matching with a lightweight object-tracking model and embedding search.
+
+## How it works
+
+First, you use the API to create an incident object with a description. Then you add any number of image or text samples to the incident. No training step is needed.
+
+Then, you can include your defined incident in a regular text analysis or image analysis request. The service will indicate whether the submitted content is an instance of your incident. The service can still do other content moderation tasks in the same API call.
+
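+A condensed sketch of that sequence is shown below, using the REST calls from the [how-to guide](../how-to/custom-categories-rapid.md); the endpoint, key, incident name, and sample text are placeholders.
+
+```python
+import requests
+
+# Placeholders: substitute your resource endpoint, key, and an incident name.
+ENDPOINT = "https://<endpoint>"
+KEY = "<your-content-safety-key>"
+NAME = "<text-incident-name>"
+API = "api-version=2024-02-15-preview"
+headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
+
+# 1. Create the incident object with a short definition (no training step).
+requests.patch(f"{ENDPOINT}/contentsafety/text/incidents/{NAME}?{API}", headers=headers,
+               json={"incidentName": NAME, "incidentDefinition": "<description>"})
+
+# 2. Add a few text samples that illustrate the incident.
+requests.post(f"{ENDPOINT}/contentsafety/text/incidents/{NAME}:addIncidentSamples?{API}",
+              headers=headers, json={"IncidentSamples": [{"text": "<text-example>"}]})
+
+# 3. Deploy the incident, then include it in a regular text analysis request.
+requests.post(f"{ENDPOINT}/contentsafety/text/incidents/{NAME}:deploy?{API}", headers=headers)
+result = requests.post(f"{ENDPOINT}/contentsafety/text:detectIncidents?{API}", headers=headers,
+                       json={"text": "<test-text>", "incidentNames": [NAME]})
+print(result.json())
+```
+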
+## Limitations
+
+### Language availability
+
+The text custom categories (rapid) API supports all languages that are supported by Content Safety text moderation. See [Language support](/azure/ai-services/content-safety/language-support).
+
+### Input limitations
+
+See the following table for the input limitations of the custom categories (rapid) API:
+
+| Object | Limitation |
+| :-- | :-- |
+| Maximum length of an incident name | 100 characters |
+| Maximum number of text/image samples per incident | 1000 |
+| Maximum size of each sample | Text: 500 characters<br>Image: 4 MB |
+| Maximum number of text or image incidents per resource | 100 |
+| Supported image formats | BMP, GIF, JPEG, PNG, TIF, WEBP |
+
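+If you build sample payloads programmatically, a quick client-side check against these limits can avoid failed calls. The following is a minimal sketch; the constants and helper name are illustrative and simply mirror the table above.
+
+```python
+# Illustrative pre-flight check that mirrors the limits in the table above.
+MAX_SAMPLES_PER_INCIDENT = 1000
+MAX_TEXT_SAMPLE_CHARS = 500
+
+def validate_text_samples(samples: list[str]) -> None:
+    """Raise ValueError if a batch of text samples would exceed the documented limits."""
+    if len(samples) > MAX_SAMPLES_PER_INCIDENT:
+        raise ValueError(f"{len(samples)} samples exceed the {MAX_SAMPLES_PER_INCIDENT}-sample limit.")
+    for i, text in enumerate(samples):
+        if len(text) > MAX_TEXT_SAMPLE_CHARS:
+            raise ValueError(f"Sample {i} is {len(text)} characters; the limit is {MAX_TEXT_SAMPLE_CHARS}.")
+
+validate_text_samples(["<text-example-1>", "<text-example-2>"])
+```
+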
+### Region availability
+
+To use this API, you must create your Azure AI Content Safety resource in one of the supported regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).
+
+## Next steps
+
+Follow the how-to guide to use the Azure AI Content Safety custom categories (rapid) API.
+
+* [Use the custom categories (rapid) API](../how-to/custom-categories-rapid.md)
ai-services Groundedness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/groundedness.md
The maximum character limit for the grounding sources is 55,000 characters per A
### Regions
-To use this API, you must create your Azure AI Content Safety resource in the supported regions. Currently, it's available in the following Azure regions:
-- East US 2-- East US -- West US-- Sweden Central
+To use this API, you must create your Azure AI Content Safety resource in the supported regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).
### TPS limitations
-| Pricing Tier | Requests per 10 seconds |
-| :-- | : |
-| F0 | 10 |
-| S0 | 10 |
+See [Query rates](/azure/ai-services/content-safety/overview#query-rates).
If you need a higher rate, [contact us](mailto:contentsafetysupport@microsoft.com) to request it.
ai-services Jailbreak Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/jailbreak-detection.md
Currently, the Prompt Shields API supports the English language. While our API d
### Text length limitations
-The maximum character limit for Prompt Shields is 10,000 characters per API call, between both the user prompts and documents combines. If your input (either user prompts or documents) exceeds these character limitations, you'll encounter an error.
+The character limits for Prompt Shields allow a user prompt of up to 10,000 characters, while the document array is restricted to a maximum of 5 documents with a combined total not exceeding 10,000 characters.
+
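+To avoid request failures, you can validate input sizes on the client before calling the API. The following is a minimal sketch; the helper and constants are illustrative and simply encode the limits described above.
+
+```python
+# Illustrative pre-flight check that encodes the limits described above.
+MAX_PROMPT_CHARS = 10_000        # maximum user prompt length
+MAX_DOCUMENTS = 5                # maximum number of documents per request
+MAX_COMBINED_DOC_CHARS = 10_000  # combined character budget for the document array
+
+def check_prompt_shields_input(user_prompt: str, documents: list[str]) -> None:
+    """Raise ValueError if the input would exceed the documented Prompt Shields limits."""
+    if len(user_prompt) > MAX_PROMPT_CHARS:
+        raise ValueError(f"User prompt is {len(user_prompt)} characters; the limit is {MAX_PROMPT_CHARS}.")
+    if len(documents) > MAX_DOCUMENTS:
+        raise ValueError(f"{len(documents)} documents supplied; the limit is {MAX_DOCUMENTS}.")
+    combined = sum(len(d) for d in documents)
+    if combined > MAX_COMBINED_DOC_CHARS:
+        raise ValueError(f"Documents total {combined} characters; the combined limit is {MAX_COMBINED_DOC_CHARS}.")
+
+check_prompt_shields_input("Summarize this document.", ["<document-1>", "<document-2>"])
+```
+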
+### Regions
+
+To use this API, you must create your Azure AI Content Safety resource in the supported regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).
### TPS limitations
-| Pricing Tier | Requests per 10 seconds |
-| :-- | :- |
-| F0 | 1000 |
-| S0 | 1000 |
+See [Query rates](/azure/ai-services/content-safety/overview#query-rates).
If you need a higher rate, please [contact us](mailto:contentsafetysupport@microsoft.com) to request it.
ai-services Custom Categories Rapid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/custom-categories-rapid.md
+
+ Title: "Use the custom categories (rapid) API"
+
+description: Learn how to use the custom categories (rapid) API to mitigate harmful content incidents quickly.
+#
+++++ Last updated : 04/11/2024++++
+# Use the custom categories (rapid) API
+
+The custom categories (rapid) API lets you quickly respond to emerging harmful content incidents. You can define an incident with a few examples in a specific topic, and the service will start detecting similar content.
+
+Follow these steps to define an incident with a few examples of text content and then analyze new text content to see if it matches the incident.
+
+> [!IMPORTANT]
+> This new feature is only available in select Azure regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).
+
+> [!CAUTION]
+> The sample data in this guide might contain offensive content. User discretion is advised.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see [Region availability](/azure/ai-services/content-safety/overview#region-availability)), and supported pricing tier. Then select **Create**.
+ * The resource takes a few minutes to deploy. After it finishes, select **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs.
+* Also [create a blob storage container](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM) if you want to upload your images there. You can alternatively encode your images as Base64 strings and use them directly in the API calls.
+* One of the following installed:
+ * [cURL](https://curl.haxx.se/) for REST API calls.
+ * [Python 3.x](https://www.python.org/) installed
+
+<!--tbd env vars-->
+
+## Test the text custom categories (rapid) API
+
+Use the sample code in this section to create a text incident, add samples to the incident, deploy the incident, and then detect text incidents.
+
+### Create an incident object
+
+#### [cURL](#tab/curl)
+
+In the commands below, replace `<your_api_key>`, `<your_endpoint>`, and other necessary parameters with your own values.
+
+The following command creates an incident with a name and definition.
+
+```bash
+curl --location --request PATCH 'https://<endpoint>/contentsafety/text/incidents/<text-incident-name>?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json' \
+--data '{
+ "incidentName": "<text-incident-name>",
+ "incidentDefinition": "string"
+}'
+```
+
+#### [Python](#tab/python)
+
+First, you need to install the required Python library:
+
+```bash
+pip install requests
+```
+
+Then, define the necessary variables with your own Azure resource details:
+
+```python
+import requests
+
+API_KEY = '<your_api_key>'
+ENDPOINT = '<your_endpoint>'
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': API_KEY,
+ 'Content-Type': 'application/json'
+}
+```
+
+The following command creates an incident with a name and definition.
++
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/text/incidents/<text-incident-name>?api-version=2024-02-15-preview "
+
+payload = json.dumps({
+ "incidentName": "<text-incident-name>",
+ "incidentDefinition": "string"
+})
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("PATCH", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+### Add samples to the incident
+
+Use the following command to add text examples to the incident.
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location 'https://<endpoint>/contentsafety/text/incidents/<text-incident-name>:addIncidentSamples?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "IncidentSamples": [
+ { "text": "<text-example-1>"},
+ { "text": "<text-example-2>"},
+ ...
+ ]
+}'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/text/incidents/<text-incident-name>:addIncidentSamples?api-version=2024-02-15-preview "
+
+payload = json.dumps({
+ "IncidentSamples": [
+ {
+ "text": "<text-example-1>"
+ },
+ {
+ "text": "<text-example-1>"
+ },
+ ...
+ ]
+})
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("POST", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+### Deploy the incident
++
+Use the following command to deploy the incident, making it available for the analysis of new content.
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location 'https://<endpoint>/contentsafety/text/incidents/<text-incident-name>:deploy?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/text/incidents/<text-incident-name>:deploy?api-version=2024-02-15-preview"
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("POST", url, headers=headers, data=payload)
+
+print(response.text)
+
+```
++
+### Detect text incidents
+
+Run the following command to analyze sample text content for the incident you just deployed.
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location 'https://<endpoint>/contentsafety/text:detectIncidents?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json' \
+--data '{
+ "text": "<test-text>",
+ "incidentNames": [
+ "<text-incident-name>"
+ ]
+}'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/text:detectIncidents?api-version=2024-02-15-preview "
+
+payload = json.dumps({
+ "text": "<test-text>",
+ "incidentNames": [
+ "<text-incident-name>"
+ ]
+})
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("POST", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+## Test the image custom categories (rapid) API
+
+Use the sample code in this section to create an image incident, add samples to the incident, deploy the incident, and then detect image incidents.
+
+### Create an incident
+
+#### [cURL](#tab/curl)
+
+In the commands below, replace `<your_api_key>`, `<your_endpoint>`, and other necessary parameters with your own values.
+
+The following command creates an image incident:
++
+```bash
+curl --location --request PATCH 'https://<endpoint>/contentsafety/image/incidents/<image-incident-name>?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json' \
+--data '{
+ "incidentName": "<image-incident-name>"
+}'
+```
+
+#### [Python](#tab/python)
+
+Make sure you've installed the required Python library:
+
+```bash
+pip install requests
+```
+
+Define the necessary variables with your own Azure resource details:
+
+```python
+import requests
+
+API_KEY = '<your_api_key>'
+ENDPOINT = '<your_endpoint>'
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': API_KEY,
+ 'Content-Type': 'application/json'
+}
+```
+
+The following command creates an image incident:
+
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/image/incidents/<image-incident-name>?api-version=2024-02-15-preview "
+
+payload = json.dumps({
+ "incidentName": "<image-incident-name>"
+})
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("PATCH", url, headers=headers, data=payload)
+
+print(response.text)
+
+```
++
+### Add samples to the incident
+
+Use the following command to add example images to your incident. The image samples can be URLs pointing to images in an Azure blob storage container, or they can be Base64 strings (a short Base64 encoding sketch follows these examples).
++
+#### [cURL](#tab/curl)
+
+```bash
+curl --location 'https://<endpoint>/contentsafety/image/incidents/<image-incident-name>:addIncidentSamples?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json' \
+--data '{
+ "IncidentSamples": [
+ {
+ "image": {
+ "content": "<base64-data>",
+ "bloburl": "<your-blob-storage-url>.png"
+ }
+ }
+ ]
+}'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/image/incidents/<image-incident-name>:addIncidentSamples?api-version=2024-02-15-preview "
+
+payload = json.dumps({
+ "IncidentSamples": [
+ {
+ "image": {
+ "content": "<base64-data>",
+ "bloburl": "<your-blob-storage-url>/image.png"
+ }
+ }
+ ]
+})
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("POST", url, headers=headers, data=payload)
+
+print(response.text)
+
+```
++
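+The samples above accept either a blob URL or Base64 image data in the `content` field. If you want to send Base64 data, one way to produce it locally is sketched below; the file path is a placeholder.
+
+```python
+import base64
+
+# Encode a local image as a Base64 string for the "content" field.
+# "sample.png" is a placeholder path; any supported image format works.
+with open("sample.png", "rb") as image_file:
+    base64_data = base64.b64encode(image_file.read()).decode("utf-8")
+
+print(base64_data[:40] + "...")  # preview of the encoded payload
+```
+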
+### Deploy the incident
+
+Use the following command to deploy the incident, making it available for the analysis of new content.
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location 'https://<endpoint>/contentsafety/image/incidents/<image-incident-name>:deploy?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/image/incidents/<image-incident-name>:deploy?api-version=2024-02-15-preview"
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("POST", url, headers=headers, data=payload)
+
+print(response.text)
+
+```
++
+### Detect image incidents
+
+Use the following command to upload a sample image and test it against the incident you deployed. You can either use a URL pointing to the image in an Azure blob storage container, or you can add the image data as a Base64 string.
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location 'https://<endpoint>/contentsafety/image:detectIncidents?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json' \
+--data '{
+ "image": {
+ "url": "<your-blob-storage-url>/image.png",
+ "content": "<base64-data>"
+ },
+ "incidentNames": [
+ "<image-incident-name>"
+ ]
+}'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/image:detectIncidents?api-version=2024-02-15-preview "
+
+payload = json.dumps({
+ "image": {
+ "url": "<your-blob-storage-url>/image.png",
+ "content": "<base64-data>"
+ },
+ "incidentNames": [
+ "<image-incident-name>"
+ ]
+})
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("POST", url, headers=headers, data=payload)
+
+print(response.text)
+
+```
++
+## Other incident operations
+
+The following operations are useful for managing incidents and incident samples.
+
+### Text incidents API
+
+#### List all incidents
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request GET 'https://<endpoint>/contentsafety/text/incidents?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/text/incidents?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("GET", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### Get the incident details
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request GET 'https://<endpoint>/contentsafety/text/incidents/<text-incident-name>?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/text/incidents/<text-incident-name>?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("GET", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### Delete the incident
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request DELETE 'https://<endpoint>/contentsafety/text/incidents/<text-incident-name>?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/text/incidents/<text-incident-name>?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("DELETE", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### List all samples under an incident
+
+This command retrieves the unique IDs of all the samples associated with a given incident object.
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request GET 'https://<endpoint>/contentsafety/text/incidents/<text-incident-name>/incidentsamples?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/text/incidents/<text-incident-name>/incidentsamples?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("GET", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### Get an incident sample's details
+
+Use an incident sample ID to look up details about the sample.
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request GET 'https://<endpoint>/contentsafety/text/incidents/<text-incident-name>/incidentsamples/<your-incident-sample-id>?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/text/incidents/<text-incident-name>/incidentsamples/<your-incident-sample-id>?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("GET", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### Delete an incident sample
+
+Use an incident sample ID to retrieve and delete that sample.
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location 'https://<endpoint>/contentsafety/text/incidents/<text-incident-name>:removeIncidentSamples?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json' \
+--data '{
+ "IncidentSampleIds": [
+ "<your-incident-sample-id>"
+ ]
+}'
+```
+#### [Python](#tab/python)
+
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/text/incidents/<text-incident-name>:removeIncidentSamples?api-version=2024-02-15-preview "
+
+payload = json.dumps({
+ "IncidentSampleIds": [
+ "<your-incident-sample-id>"
+ ]
+})
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("POST", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+### Image incidents API
+
+#### Get the incidents list
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request GET 'https://<endpoint>/contentsafety/image/incidents?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/image/incidents?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("GET", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### Get the incident details
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request GET 'https://<endpoint>/contentsafety/image/incidents/<image-incident-name>?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/image/incidents/<image-incident-name>?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("GET", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### Delete the incident
+
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request DELETE 'https://<endpoint>/contentsafety/image/incidents/<image-incident-name>?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/image/incidents/<image-incident-name>?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("DELETE", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### List all samples under an incident
+
+This command retrieves the unique IDs of all the samples associated with a given incident object.
++
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request GET 'https://<endpoint>/contentsafety/image/incidents/<image-incident-name>/incidentsamples?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/image/incidents/<image-incident-name>/incidentsamples?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("GET", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### Get the incident sample details
+
+Use an incident sample ID to look up details about the sample.
++
+#### [cURL](#tab/curl)
+
+```bash
+curl --location --request GET 'https://<endpoint>/contentsafety/image/incidents/<image-incident-name>/incidentsamples/<your-incident-sample-id>?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>'
+```
+#### [Python](#tab/python)
+
+```python
+import requests
+
+url = "https://<endpoint>/contentsafety/image/incidents/<image-incident-name>/incidentsamples/<your-incident-sample-id>?api-version=2024-02-15-preview "
+
+payload = {}
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>'
+}
+
+response = requests.request("GET", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+#### Delete the incident sample
+
+Use an incident sample ID to retrieve and delete that sample.
++
+#### [cURL](#tab/curl)
+
+```bash
+curl --location 'https://<endpoint>/contentsafety/image/incidents/<image-incident-name>:removeIncidentSamples?api-version=2024-02-15-preview' \
+--header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \
+--header 'Content-Type: application/json' \
+--data '{
+ "IncidentSampleIds": [
+ "<your-incident-sample-id>"
+ ]
+}'
+```
+#### [Python](#tab/python)
+
+```python
+import requests
+import json
+
+url = "https://<endpoint>/contentsafety/image/incidents/<image-incident-name>:removeIncidentSamples?api-version=2024-02-15-preview "
+
+payload = json.dumps({
+ "IncidentSampleIds": [
+ "<your-incident-sample-id>"
+ ]
+})
+headers = {
+ 'Ocp-Apim-Subscription-Key': '<your-content-safety-key>',
+ 'Content-Type': 'application/json'
+}
+
+response = requests.request("POST", url, headers=headers, data=payload)
+
+print(response.text)
+```
++
+## Related content
+
+- [Custom categories (rapid) concepts](../concepts/custom-categories-rapid.md)
+- [What is Azure AI Content Safety?](../overview.md)
ai-services Use Blocklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/use-blocklist.md
The default AI classifiers are sufficient for most content moderation needs. How
## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (East US or West Europe), and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see [Region availability](/azure/ai-services/content-safety/overview#region-availability)), and supported pricing tier. Then select **Create**.
* The resource takes a few minutes to deploy. After it finishes, Select **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs. * One of the following installed: * [cURL](https://curl.haxx.se/) for REST API calls.
curl --location --request POST '<endpoint>/contentsafety/text/blocklists/<your_l
> You can add multiple blocklistItems in one API call. Make the request body a JSON array of data groups: > > ```json
-> [{
-> "description": "string",
-> "text": "bleed"
-> },
> {
-> "description": "string",
-> "text": "blood"
-> }]
+> "blocklistItems": [
+> {
+> "description": "string",
+> "text": "bleed"
+> },
+> {
+> "description": "string",
+> "text": "blood"
+> }
+> ]
+>}
> ```
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/overview.md
There are different types of analysis available from this service. The following
| Prompt Shields (preview) | Scans text for the risk of a [User input attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md) | | Groundedness detection (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. [Quickstart](./quickstart-groundedness.md) | | Protected material text detection (preview) | Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)|
+| Custom categories (rapid) API (preview) | Lets you define [emerging harmful content patterns](./concepts/custom-categories-rapid.md) and scan text and images for matches. [How-to guide](./how-to/custom-categories-rapid.md) |
## Content Safety Studio
Learn how Azure AI Content Safety handles the [encryption and decryption of your
## Pricing
-Currently, Azure AI Content Safety has an **F0 and S0** pricing tier.
+Currently, Azure AI Content Safety has **F0 and S0** pricing tiers. See the Azure [pricing page](https://aka.ms/content-safety-pricing) for more information.
## Service limits
Content Safety models have been specifically trained and tested in the following
For more information, see [Language support](/azure/ai-services/content-safety/language-support).
-### Region/location
-
-To use the Content Safety APIs, you must create your Azure AI Content Safety resource in the supported regions. Currently, it is available in the following Azure regions:
-- Australia East-- Canada East-- Central US-- East US-- East US 2-- France Central-- Japan East-- North Central US-- South Central US-- Switzerland North-- UK South-- West Europe-- West US 2-- Sweden Central-
-Public preview features, such as Prompt Shields and protected material detection, are available in the following Azure regions:
-- East US-- West Europe
+### Region availability
+
+To use the Content Safety APIs, you must create your Azure AI Content Safety resource in a supported region. Currently, the Content Safety features are available in the following Azure regions:
+
+| Region | Moderation APIs | Prompt Shields | Protected material<br>detection | Groundedness<br>detection | Incident response | Blocklists |
+| --- | --- | --- | --- | --- | --- | --- |
+| East US | ✅ | ✅| ✅ |✅ |✅ |✅ |
+| East US 2 | ✅ | | | ✅ | | |
+| West US | | | | | ✅ | |
+| West US 2 | ✅ | | | | | |
+| Central US | ✅ | | | | | |
+| North Central US | ✅ | | | | | |
+| South Central US | ✅ | | | | | |
+| Canada East | ✅ | | | | | |
+| Switzerland North | ✅ | | | | | |
+| Sweden Central | ✅ | | |✅ |✅ | |
+| UK South | ✅ | | | | | |
+| France Central | ✅ | | | | | |
+| West Europe | ✅ | ✅ |✅ | | |✅ |
+| Japan East | ✅ | | | | | |
+| Australia East| ✅ | ✅ | | | | |
Feel free to [contact us](mailto:contentsafetysupport@microsoft.com) if you need other regions for your business. ### Query rates
-| Pricing Tier | Requests per 10 seconds (RPS) |
+| Pricing Tier | Requests per 10 seconds |
| :-- | :-- |
| F0 | 1000 |
| S0 | 1000 |
+
+#### Prompt Shields
+| Pricing Tier | Requests per 10 seconds |
+| :-- | :- |
+| F0 | 1000 |
+| S0 | 1000 |
+
+#### Groundedness detection
+| Pricing Tier | Requests per 10 seconds |
+| :-- | :-- |
+| F0 | 50 |
+| S0 | 50 |
++ If you need a faster rate, please [contact us](mailto:contentsafetysupport@microsoft.com) to request.
ai-services Quickstart Groundedness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-groundedness.md
Follow this guide to use Azure AI Content Safety Groundedness detection to check
## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (East US2, West US, Sweden Central), and supported pricing tier. Then select **Create**.
- * The resource takes a few minutes to deploy. After it does, go to the new resource. In the left pane, under **Resource Management**, select **API Keys and Endpoints**. Copy one of the subscription key values and endpoint to a temporary location for later use.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (East US, East US 2, West US, Sweden Central), and supported pricing tier. Then select **Create**.
+* The resource takes a few minutes to deploy. After it does, go to the new resource. In the left pane, under **Resource Management**, select **API Keys and Endpoints**. Copy one of the subscription key values and endpoint to a temporary location for later use.
* (Optional) If you want to use the _reasoning_ feature, create an Azure OpenAI Service resource with a GPT model deployed. * [cURL](https://curl.haxx.se/) or [Python](https://www.python.org/downloads/) installed. ## Check groundedness without reasoning
-In the simple case without the _reasoning_ feature, the Groundedness detection API classifies the ungroundedness of the submitted content as `true` or `false` and provides a confidence score.
+In the simple case without the _reasoning_ feature, the Groundedness detection API classifies the ungroundedness of the submitted content as `true` or `false`.
#### [cURL](#tab/curl)
This section walks through a sample request with cURL. Paste the command below i
"groundingSources": [ "I'm 21 years old and I need to make a decision about the next two years of my life. Within a week. I currently work for a bank that requires strict sales goals to meet. IF they aren't met three times (three months) you're canned. They pay me 10/hour and it's not unheard of to get a raise in 6ish months. The issue is, **I'm not a salesperson**. That's not my personality. I'm amazing at customer service, I have the most positive customer service \"reports\" done about me in the short time I've worked here. A coworker asked \"do you ask for people to fill these out? you have a ton\". That being said, I have a job opportunity at Chase Bank as a part time teller. What makes this decision so hard is that at my current job, I get 40 hours and Chase could only offer me 20 hours/week. Drive time to my current job is also 21 miles **one way** while Chase is literally 1.8 miles from my house, allowing me to go home for lunch. I do have an apartment and an awesome roommate that I know wont be late on his portion of rent, so paying bills with 20hours a week isn't the issue. It's the spending money and being broke all the time.\n\nI previously worked at Wal-Mart and took home just about 400 dollars every other week. So I know i can survive on this income. I just don't know whether I should go for Chase as I could definitely see myself having a career there. I'm a math major likely going to become an actuary, so Chase could provide excellent opportunities for me **eventually**." ],
- "reasoning": False
+ "reasoning": false
}' ```
-1. Open a command prompt and run the cURL command.
+Open a command prompt and run the cURL command.
#### [Python](#tab/python)
Create a new Python file named _quickstart.py_. Open the new file in your prefer
"groundingSources": [ "I'm 21 years old and I need to make a decision about the next two years of my life. Within a week. I currently work for a bank that requires strict sales goals to meet. IF they aren't met three times (three months) you're canned. They pay me 10/hour and it's not unheard of to get a raise in 6ish months. The issue is, **I'm not a salesperson**. That's not my personality. I'm amazing at customer service, I have the most positive customer service \"reports\" done about me in the short time I've worked here. A coworker asked \"do you ask for people to fill these out? you have a ton\". That being said, I have a job opportunity at Chase Bank as a part time teller. What makes this decision so hard is that at my current job, I get 40 hours and Chase could only offer me 20 hours/week. Drive time to my current job is also 21 miles **one way** while Chase is literally 1.8 miles from my house, allowing me to go home for lunch. I do have an apartment and an awesome roommate that I know wont be late on his portion of rent, so paying bills with 20hours a week isn't the issue. It's the spending money and being broke all the time.\n\nI previously worked at Wal-Mart and took home just about 400 dollars every other week. So I know i can survive on this income. I just don't know whether I should go for Chase as I could definitely see myself having a career there. I'm a math major likely going to become an actuary, so Chase could provide excellent opportunities for me **eventually**." ],
- "reasoning": False
+ "reasoning": false
}) headers = { 'Ocp-Apim-Subscription-Key': '<your_subscription_key>',
Create a new Python file named _quickstart.py_. Open the new file in your prefer
-> [!TIP]
-> To test a summarization task instead of a question answering (QnA) task, use the following sample JSON body:
->
-> ```json
-> {
-> "Domain": "Medical",
-> "Task": "Summarization",
-> "Text": "Ms Johnson has been in the hospital after experiencing a stroke.",
-> "GroundingSources": ["Our patient, Ms. Johnson, presented with persistent fatigue, unexplained weight loss, and frequent night sweats. After a series of tests, she was diagnosed with HodgkinΓÇÖs lymphoma, a type of cancer that affects the lymphatic system. The diagnosis was confirmed through a lymph node biopsy revealing the presence of Reed-Sternberg cells, a characteristic of this disease. She was further staged using PET-CT scans. Her treatment plan includes chemotherapy and possibly radiation therapy, depending on her response to treatment. The medical team remains optimistic about her prognosis given the high cure rate of HodgkinΓÇÖs lymphoma."],
-> "Reasoning": false
-> }
-> ```
+To test a summarization task instead of a question answering (QnA) task, use the following sample JSON body:
+```json
+{
+ "domain": "Medical",
+ "task": "Summarization",
+ "text": "Ms Johnson has been in the hospital after experiencing a stroke.",
+ "groundingSources": ["Our patient, Ms. Johnson, presented with persistent fatigue, unexplained weight loss, and frequent night sweats. After a series of tests, she was diagnosed with HodgkinΓÇÖs lymphoma, a type of cancer that affects the lymphatic system. The diagnosis was confirmed through a lymph node biopsy revealing the presence of Reed-Sternberg cells, a characteristic of this disease. She was further staged using PET-CT scans. Her treatment plan includes chemotherapy and possibly radiation therapy, depending on her response to treatment. The medical team remains optimistic about her prognosis given the high cure rate of HodgkinΓÇÖs lymphoma."],
+ "reasoning": false
+}
+```
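
As a hedged companion to the JSON body above, the following sketch posts it with Python's `requests` library. The `text:detectGroundedness` path and the `2024-02-15-preview` API version are assumptions based on the preview Groundedness detection API; substitute your own endpoint and key.

```Python
# Sketch only: send the summarization request body above to the Groundedness detection API.
# The path and api-version are assumptions; check them against the quickstart's cURL example.
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
key = "<your_subscription_key>"

body = {
    "domain": "Medical",
    "task": "Summarization",
    "text": "Ms Johnson has been in the hospital after experiencing a stroke.",
    "groundingSources": ["<the grounding source text from the JSON body above>"],
    "reasoning": False,
}
response = requests.post(
    f"{endpoint}/contentsafety/text:detectGroundedness?api-version=2024-02-15-preview",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
print(response.json())  # expect ungroundedDetected, ungroundedPercentage, ungroundedDetails
```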
The following fields must be included in the URL:
The parameters in the request body are defined in this table:
| - `query` | (Optional) This represents the question in a QnA task. Character limit: 7,500. | String | | **text** | (Required) The LLM output text to be checked. Character limit: 7,500. | String | | **groundingSources** | (Required) Uses an array of grounding sources to validate AI-generated text. Up to 55,000 characters of grounding sources can be analyzed in a single request. | String array |
-| **reasoning** | (Optional) Specifies whether to use the reasoning feature. The default value is `false`. If `true`, you need to bring your own Azure OpenAI resources to provide an explanation. Be careful: using reasoning increases the processing time and incurs extra fees.| Boolean |
+| **reasoning** | (Optional) Specifies whether to use the reasoning feature. The default value is `false`. If `true`, you need to bring your own Azure OpenAI GPT-4 Turbo (1106-preview) resources to provide an explanation. Be careful: using reasoning increases the processing time.| Boolean |
### Interpret the API response
The JSON objects in the output are defined here:
| Name | Description | Type | | : | :-- | - | | **ungroundedDetected** | Indicates whether the text exhibits ungroundedness. | Boolean |
-| **confidenceScore** | The confidence value of the _ungrounded_ designation. The score ranges from 0 to 1. | Float |
| **ungroundedPercentage** | Specifies the proportion of the text identified as ungrounded, expressed as a number between 0 and 1, where 0 indicates no ungrounded content and 1 indicates entirely ungrounded content.| Float | | **ungroundedDetails** | Provides insights into ungrounded content with specific examples and percentages.| Array |
-| -**`Text`** | The specific text that is ungrounded. | String |
+| -**`text`** | The specific text that is ungrounded. | String |
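
To illustrate how these fields might be consumed (a sketch, not part of the quickstart; the 20% threshold is an arbitrary example), a caller could flag responses whose ungrounded share is too high:

```Python
# Sketch only: read the response fields described above. The 0.2 threshold is an
# arbitrary example value, not a service recommendation.
def summarize_groundedness(response: dict, max_ungrounded: float = 0.2) -> None:
    if not response.get("ungroundedDetected", False):
        print("No ungrounded content detected.")
        return
    share = response.get("ungroundedPercentage", 0.0)
    print(f"Ungrounded share of the text: {share:.0%}")
    if share > max_ungrounded:
        print("Above the allowed limit; review these spans:")
        for detail in response.get("ungroundedDetails", []):
            print(f"  - {detail['text']}")

# Hypothetical response for illustration:
summarize_groundedness({
    "ungroundedDetected": True,
    "ungroundedPercentage": 1.0,
    "ungroundedDetails": [{"text": "12/hour."}],
})
```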
## Check groundedness with reasoning
-The Groundedness detection API provides the option to include _reasoning_ in the API response. With reasoning enabled, the response includes a `"reasoning"` field that details specific instances and explanations for any detected ungroundedness. Be careful: using reasoning increases the processing time and incurs extra fees.
-
+The Groundedness detection API provides the option to include _reasoning_ in the API response. With reasoning enabled, the response includes a `"reasoning"` field that details specific instances and explanations for any detected ungroundedness.
### Bring your own GPT deployment
-In order to use your Azure OpenAI resource to enable the reasoning feature, use Managed Identity to allow your Content Safety resource to access the Azure OpenAI resource:
-
-1. Enable Managed Identity for Azure AI Content Safety.
-
- Navigate to your Azure AI Content Safety instance in the Azure portal. Find the **Identity** section under the **Settings** category. Enable the system-assigned managed identity. This action grants your Azure AI Content Safety instance an identity that can be recognized and used within Azure for accessing other resources.
-
- :::image type="content" source="media/content-safety-identity.png" alt-text="Screenshot of a Content Safety identity resource in the Azure portal." lightbox="media/content-safety-identity.png":::
-
-1. Assign Role to Managed Identity.
-
- Navigate to your Azure OpenAI instance, select **Add role assignment** to start the process of assigning an Azure OpenAI role to the Azure AI Content Safety identity.
-
- :::image type="content" source="media/add-role-assignment.png" alt-text="Screenshot of adding role assignment in Azure portal.":::
+> [!TIP]
+> At the moment, we only support **Azure OpenAI GPT-4 Turbo (1106-preview)** resources and do not support other GPT types. You have the flexibility to deploy your GPT-4 Turbo (1106-preview) resources in any region. However, to minimize potential latency and avoid any geographical boundary data privacy and risk concerns, we recommend situating them in the same region as your content safety resources. For comprehensive details on data privacy, please refer to the [Data, privacy and security guidelines for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy) and [Data, privacy, and security for Azure AI Content Safety](/legal/cognitive-services/content-safety/data-privacy?context=%2Fazure%2Fai-services%2Fcontent-safety%2Fcontext%2Fcontext).
- Choose the **User** or **Contributor** role.
+In order to use your Azure OpenAI GPT-4 Turbo (1106-preview) resource to enable the reasoning feature, use Managed Identity to allow your Content Safety resource to access the Azure OpenAI resource:
- :::image type="content" source="media/assigned-roles-simple.png" alt-text="Screenshot of the Azure portal with the Contributor and User roles displayed in a list." lightbox="media/assigned-roles-simple.png":::
### Make the API request
-In your request to the Groundedness detection API, set the `"Reasoning"` body parameter to `true`, and provide the other needed parameters:
+In your request to the Groundedness detection API, set the `"reasoning"` body parameter to `true`, and provide the other needed parameters:
```json {
The parameters in the request body are defined in this table:
| **text** | (Required) The LLM output text to be checked. Character limit: 7,500. | String | | **groundingSources** | (Required) Uses an array of grounding sources to validate AI-generated text. Up to 55,000 characters of grounding sources can be analyzed in a single request. | String array | | **reasoning** | (Optional) Set to `true`, the service uses Azure OpenAI resources to provide an explanation. Be careful: using reasoning increases the processing time and incurs extra fees.| Boolean |
-| **llmResource** | (Optional) If you want to use your own Azure OpenAI resources instead of our default GPT resources, add this field and include the subfields for the resources used. If you don't want to use your own resources, remove this field from the input. | String |
-| - `resourceType `| Specifies the type of resource being used. Currently it only allows `AzureOpenAI`. | Enum|
+| **llmResource** | (Required) If you want to use your own Azure OpenAI GPT-4 Turbo (1106-preview) resource to enable reasoning, add this field and include the subfields for the resources used. | String |
+| - `resourceType` | Specifies the type of resource being used. Currently it only allows `AzureOpenAI`. We only support Azure OpenAI GPT-4 Turbo (1106-preview) resources and do not support other GPT types. | Enum |
| - `azureOpenAIEndpoint `| Your endpoint URL for Azure OpenAI service. | String | | - `azureOpenAIDeploymentName` | The name of the specific GPT deployment to use. | String|
After you submit your request, you'll receive a JSON response reflecting the Gro
{ "text": "12/hour.", "offset": {
- "utF8": 0,
- "utF16": 0,
+ "utf8": 0,
+ "utf16": 0,
"codePoint": 0 }, "length": {
- "utF8": 8,
- "utF16": 8,
+ "utf8": 8,
+ "utf16": 8,
"codePoint": 8 }, "reason": "None. The premise mentions a pay of \"10/hour\" but does not mention \"12/hour.\" It's neutral. "
The JSON objects in the output are defined here:
| Name | Description | Type | | : | :-- | - | | **ungroundedDetected** | Indicates whether the text exhibits ungroundedness. | Boolean |
-| **confidenceScore** | The confidence value of the _ungrounded_ designation. The score ranges from 0 to 1. | Float |
| **ungroundedPercentage** | Specifies the proportion of the text identified as ungrounded, expressed as a number between 0 and 1, where 0 indicates no ungrounded content and 1 indicates entirely ungrounded content.| Float | | **ungroundedDetails** | Provides insights into ungrounded content with specific examples and percentages.| Array |
-| -**`Text`** | The specific text that is ungrounded. | String |
+| -**`text`** | The specific text that is ungrounded. | String |
| -**`offset`** | An object describing the position of the ungrounded text in various encoding. | String | | - `offset > utf8` | The offset position of the ungrounded text in UTF-8 encoding. | Integer | | - `offset > utf16` | The offset position of the ungrounded text in UTF-16 encoding. | Integer |
The JSON objects in the output are defined here:
| - `length > utf8` | The length of the ungrounded text in UTF-8 encoding. | Integer | | - `length > utf16` | The length of the ungrounded text in UTF-16 encoding. | Integer | | - `length > codePoint` | The length of the ungrounded text in terms of Unicode code points. |Integer |
-| -**`Reason`** | Offers explanations for detected ungroundedness. | String |
+| -**`reason`** | Offers explanations for detected ungroundedness. | String |
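
Because Python strings index by Unicode code point, the `codePoint` offset and length pairs can be used directly to slice the checked text. The response fragment below is a trimmed, hypothetical example modeled on the output shown earlier.

```Python
# Illustrative only: slice the checked text using the codePoint offset/length pair.
# Python str indexing counts Unicode code points, so these values map directly.
llm_output_text = "12/hour. The rest of the answer continues here."

ungrounded_detail = {  # trimmed, hypothetical detail modeled on the response above
    "text": "12/hour.",
    "offset": {"utf8": 0, "utf16": 0, "codePoint": 0},
    "length": {"utf8": 8, "utf16": 8, "codePoint": 8},
}

start = ungrounded_detail["offset"]["codePoint"]
end = start + ungrounded_detail["length"]["codePoint"]
print(llm_output_text[start:end])  # -> "12/hour."
```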
## Clean up resources
ai-services Quickstart Jailbreak https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-jailbreak.md
Follow this guide to use Azure AI Content Safety Prompt Shields to check your la
## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (East US or West Europe), and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see [Region availability](/azure/ai-services/content-safety/overview#region-availability)), and supported pricing tier. Then select **Create**.
* The resource takes a few minutes to deploy. After it finishes, select **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs. * [cURL](https://curl.haxx.se/) installed
ai-services Quickstart Protected Material https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-protected-material.md
The protected material text describes language that matches known text content (
## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (East US or West Europe), and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see [Region availability](/azure/ai-services/content-safety/overview#region-availability)), and supported pricing tier. Then select **Create**.
* The resource takes a few minutes to deploy. After it finishes, select **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs. * [cURL](https://curl.haxx.se/) installed
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/whats-new.md
Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+## May 2024
++
+### Custom categories (rapid) API
+
+The custom categories (rapid) API lets you quickly define emerging harmful content patterns and scan text and images for matches. See [Custom categories (rapid)](./concepts/custom-categories-rapid.md) to learn more.
+ ## March 2024 ### Prompt Shields public preview
The new Jailbreak risk detection and Protected material detection APIs let you m
- Jailbreak risk detection scans text for the risk of a [jailbreak attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md) - Protected material text detection scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)
-Jailbreak risk and Protected material detection are available in the East US and West Europe Azure regions.
+Jailbreak risk and Protected material detection are only available in select regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).
## October 2023
ai-services Copy Move Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/copy-move-projects.md
After you've created and trained a Custom Vision project, you may want to copy your project to another resource. If your app or business depends on a Custom Vision project, we recommend you copy your model to another Custom Vision account in another region. Then if a regional outage occurs, you can access your project in the region where it was copied.
-The **[ExportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** and **[ImportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** APIs enable this scenario by allowing you to copy projects from one Custom Vision account into others. This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service, like the [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) for Visual Studio Code, to issue the requests.
+The **[ExportProject](/rest/api/customvision/training/projects/export?view=rest-customvision-training-v3.3&tabs=HTTP)** and **[ImportProject](/rest/api/customvision/training/projects/import?view=rest-customvision-training-v3.3&tabs=HTTP)** APIs enable this scenario by allowing you to copy projects from one Custom Vision account into others. This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service, like the [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) for Visual Studio Code, to issue the requests.
> [!TIP] > For an example of this scenario using the Python client library, see the [Move Custom Vision Project](https://github.com/Azure-Samples/custom-vision-move-project/tree/master/) repository on GitHub.
The process for copying a project consists of the following steps:
## Get the project ID
-First call **[GetProjects](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddead)** to see a list of your existing Custom Vision projects and their IDs. Use the training key and endpoint of your source account.
+First call **[GetProjects](/rest/api/customvision/training/projects/get?view=rest-customvision-training-v3.3&tabs=HTTP)** to see a list of your existing Custom Vision projects and their IDs. Use the training key and endpoint of your source account.
```curl curl -v -X GET "{endpoint}/customvision/v3.3/Training/projects"
You'll get a `200/OK` response with a list of projects and their metadata in the
## Export the project
-Call **[ExportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)** using the project ID and your source training key and endpoint.
+Call **[ExportProject](/rest/api/customvision/training/projects/export?view=rest-customvision-training-v3.3&tabs=HTTP)** using the project ID and your source training key and endpoint.
```curl curl -v -X GET "{endpoint}/customvision/v3.3/Training/projects/{projectId}/export"
You'll get a `200/OK` response with metadata about the exported project and a re
## Import the project
-Call **[ImportProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee3)** using your target training key and endpoint, along with the reference token. You can also give your project a name in its new account.
+Call **[ImportProject](/rest/api/customvision/training/projects/import?view=rest-customvision-training-v3.3&tabs=HTTP)** using your target training key and endpoint, along with the reference token. You can also give your project a name in its new account.
```curl curl -v -G -X POST "{endpoint}/customvision/v3.3/Training/projects/import"
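
For readers who prefer Python over cURL, here's a rough `requests` sketch of the same export/import flow. The `Training-Key` header, the `token` response field, and the query parameter names are assumptions inferred from the cURL calls above; confirm them against the ExportProject and ImportProject references before relying on them.

```Python
# Sketch only: copy a project between Custom Vision training resources via the REST API.
import requests

source_endpoint = "<source-endpoint>"      # the {endpoint} of the source training resource
source_key = "<source_training_key>"
target_endpoint = "<target-endpoint>"
target_key = "<target_training_key>"
project_id = "<project_id_from_GetProjects>"

# Export the project from the source account.
export = requests.get(
    f"{source_endpoint}/customvision/v3.3/Training/projects/{project_id}/export",
    headers={"Training-Key": source_key},
)
export.raise_for_status()
token = export.json()["token"]  # assumed field name for the reference token

# Import the project into the target account, optionally renaming it.
imported = requests.post(
    f"{target_endpoint}/customvision/v3.3/Training/projects/import",
    headers={"Training-Key": target_key},
    params={"token": token, "name": "copied-project"},
)
imported.raise_for_status()
print(imported.json())
```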
ai-services Export Model Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/export-model-python.md
The results of running the image tensor through the model will then need to be m
Next, learn how to wrap your model into a mobile application: * [Use your exported Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
-* [Use your exported CoreML model in an Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
+* [Use your exported CoreML model in a Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
* [Use your exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
ai-services Iot Visual Alerts Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/iot-visual-alerts-tutorial.md
Follow these steps to get the IoT Visual Alerts app running on your PC or IoT de
If you're running the app on your PC, select **Local Machine** for the target device in Visual Studio, and select **x64** or **x86** for the target platform. Then press F5 to run the program. The app should start and display the live feed from the camera and a status message.
-If you're deploying to a IoT device with an ARM processor, you'll need to select **ARM** as the target platform and **Remote Machine** as the target device. Provide the IP address of your device when prompted (it must be on the same network as your PC). You can get the IP Address from the Windows IoT default app once you boot the device and connect it to the network. Press F5 to run the program.
+If you're deploying to an IoT device with an ARM processor, you'll need to select **ARM** as the target platform and **Remote Machine** as the target device. Provide the IP address of your device when prompted (it must be on the same network as your PC). You can get the IP Address from the Windows IoT default app once you boot the device and connect it to the network. Press F5 to run the program.
When you run the app for the first time, it won't have any knowledge of visual states. It will display a status message that no model is available.
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/role-based-access-control.md
Azure RBAC can be assigned to a Custom Vision resource. To grant access to an Az
1. On the **Members** tab, select a user, group, service principal, or managed identity. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
## Custom Vision role types
ai-services Select Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/select-domain.md
This guide shows you how to select a domain for your project in the Custom Vision Service.
-From the **settings** tab of your project on the Custom Vision web portal, you can select a model domain for your project. You'll want to choose the domain that's closest to your use case scenario. If you're accessing Custom Vision through a client library or REST API, you'll need to specify a domain ID when creating the project. You can get a list of domain IDs with [Get Domains](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeab). Or, use the table below.
+From the **settings** tab of your project on the Custom Vision web portal, you can select a model domain for your project. You'll want to choose the domain that's closest to your use case scenario. If you're accessing Custom Vision through a client library or REST API, you'll need to specify a domain ID when creating the project. You can get a list of domain IDs with [Get Domains](/rest/api/customvision/training/domains/list?view=rest-customvision-training-v3.3&tabs=HTTP). Or, use the table below.
## Image Classification domains
ai-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/storage-integration.md
Next, go to your storage resource in the Azure portal. Go to the **Access contro
- If you plan to use the model backup feature, select the **Storage Blob Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete. - If you plan to use the notification queue feature, then select the **Storage Queue Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
-For help with role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+For help with role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
### Get integration URLs
Now that you have the integration URLs, you can create a new Custom Vision proje
#### [Create a new project](#tab/create)
-When you call the [CreateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeae) API, add the optional parameters _exportModelContainerUri_ and _notificationQueueUri_. Assign the URL values you got in the previous section.
+When you call the [CreateProject](/rest/api/customvision/training/projects/create?view=rest-customvision-training-v3.3&tabs=HTTP) API, add the optional parameters _exportModelContainerUri_ and _notificationQueueUri_. Assign the URL values you got in the previous section.
```curl curl -v -X POST "{endpoint}/customvision/v3.3/Training/projects?exportModelContainerUri={inputUri}&notificationQueueUri={inputUri}&name={inputName}"
If you receive a `200/OK` response, that means the URLs have been set up success
#### [Update an existing project](#tab/update)
-To update an existing project with Azure storage feature integration, call the [UpdateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb1) API, using the ID of the project you want to update.
+To update an existing project with Azure storage feature integration, call the [UpdateProject](/rest/api/customvision/training/projects/update?view=rest-customvision-training-v3.3&tabs=HTTP) API, using the ID of the project you want to update.
```curl curl -v -X PATCH "{endpoint}/customvision/v3.3/Training/projects/{projectId}"
In your notification queue, you should see a test notification in the following
## Get event notifications
-When you're ready, call the [TrainProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc7548b571998fddee1) API on your project to do an ordinary training operation.
+When you're ready, call the [TrainProject](/rest/api/customvision/training/projects/train?view=rest-customvision-training-v3.3&tabs=HTTP) API on your project to do an ordinary training operation.
In your Storage notification queue, you'll receive a notification once training finishes:
The `"trainingStatus"` field may be either `"TrainingCompleted"` or `"TrainingFa
## Get model export backups
-When you're ready, call the [ExportIteration](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddece) API to export a trained model into a specified platform.
+When you're ready, call the [ExportIteration](/rest/api/customvision/training/iterations/export?view=rest-customvision-training-v3.3&tabs=HTTP) API to export a trained model into a specified platform.
In your designated storage container, a backup copy of the exported model will appear. The blob name will have the format:
The `"exportStatus"` field may be either `"ExportCompleted"` or `"ExportFailed"`
## Next steps In this guide, you learned how to copy and back up a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.
-* [REST API reference documentation (training)](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb3)
-* [REST API reference documentation (prediction)](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.1/operations/5eb37d24548b571998fde5f3)
+* [REST API reference documentation (training)](/rest/api/customvision/training/operation-groups?view=rest-customvision-training-v3.3)
+* [REST API reference documentation (prediction)](/rest/api/customvision/prediction/operation-groups?view=rest-customvision-prediction-v3.1)
ai-services Use Prediction Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/use-prediction-api.md
After you've trained your model, you can test it programmatically by submitting images to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs. > [!NOTE]
-> This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15).
+> This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](/rest/api/customvision/prediction/operation-groups?view=rest-customvision-prediction-v3.1).
## Setup
ai-services Disable Local Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/disable-local-auth.md
You can use PowerShell to determine whether the local authentication policy is c
## Re-enable local authentication
-To enable local authentication, execute the PowerShell cmdlet **[Set-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/set-azcognitiveservicesaccount)** with the parameter `-DisableLocalAuth false`.  Allow a few minutes for the service to accept the change to allow local authentication requests.
+To enable local authentication, execute the PowerShell cmdlet **[Set-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/set-azcognitiveservicesaccount)** with the parameter `-DisableLocalAuth $false`. Allow a few minutes for the service to accept the change to allow local authentication requests.
## Next steps - [Authenticate requests to Azure AI services](./authentication.md)
ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-accuracy-confidence.md
- ignite-2023 Previously updated : 02/29/2024 Last updated : 04/16/2023
Field confidence indicates an estimated probability between 0 and 1 that the pre
## Interpret accuracy and confidence scores for custom models When interpreting the confidence score from a custom model, you should consider all the confidence scores returned from the model. Let's start with a list of all the confidence scores.
-1. **Document type confidence score**: The document type confidence is an indicator of closely the analyzed document resembleds documents in the training dataset. When the document type confidence is low, this is indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is re-trained, it should be better equipped to handl that class of variations.
-2. **Field level confidence**: Each labled field extracted has an associated confidence score. This score reflects the model's confidence on the position of the value extracted. While evaluating the confidence you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the OCR results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field.
-3. **Word confidence score** Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words, each word has an associated span and confidence. Spans from the custom field extracted values will match the spans of the extracted words.
-4. **Selection mark confidence score**: The pages array also contains an array of selection marks, each selection mark has a confidence score representing the confidence of the seletion mark and selection state detection. When a labeled field is a selection mark, the custom field selection confidence combined with the selection mark confidence is an accurate representation of the overall confidence that the field was extracted correctly.
+
+1. **Document type confidence score**: The document type confidence is an indicator of how closely the analyzed document resembles documents in the training dataset. When the document type confidence is low, it's indicative of template or structural variations in the analyzed document. To improve the document type confidence, label a document with that specific variation and add it to your training dataset. Once the model is retrained, it should be better equipped to handle that class of variations.
+2. **Field level confidence**: Each labeled field extracted has an associated confidence score. This score reflects the model's confidence in the position of the value extracted. While evaluating confidence scores, you should also look at the underlying extraction confidence to generate a comprehensive confidence for the extracted result. Evaluate the `OCR` results for text extraction or selection marks depending on the field type to generate a composite confidence score for the field (a simple combining heuristic is sketched after this list).
+3. **Word confidence score**: Each word extracted within the document has an associated confidence score. The score represents the confidence of the transcription. The pages array contains an array of words and each word has an associated span and confidence score. Spans from the custom field extracted values match the spans of the extracted words.
+4. **Selection mark confidence score**: The pages array also contains an array of selection marks. Each selection mark has a confidence score representing the confidence of the selection mark and selection state detection. When a labeled field has a selection mark, the custom field selection confidence combined with the selection mark confidence is an accurate representation of the overall confidence that the field was extracted correctly.
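
The article doesn't prescribe a single formula for the composite score, so the following is only an illustrative heuristic: multiply the labeled field's confidence by the lowest confidence of the OCR words that fall within its spans.

```Python
# Illustrative heuristic only; Document Intelligence does not return this value itself.
def composite_field_confidence(field_confidence: float, word_confidences: list[float]) -> float:
    """Combine a labeled field's confidence with the confidences of its underlying words."""
    if not word_confidences:
        return field_confidence
    return field_confidence * min(word_confidences)

# Example: a field extracted at 0.95 whose spanned words were read at 0.99 and 0.90.
print(composite_field_confidence(0.95, [0.99, 0.90]))  # -> 0.855
```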
The following table demonstrates how to interpret both the accuracy and confidence scores to measure your custom model's performance.
The following table demonstrates how to interpret both the accuracy and confiden
## Table, row, and cell confidence
-With the addition of table, row and cell confidence with the ```2024-02-29-preview``` API, here are some common questions that should help with interpreting the table, row and cell scores:
+With the addition of table, row, and cell confidence in the ```2024-02-29-preview``` API, here are some common questions that should help with interpreting the table, row, and cell scores:
**Q:** Is it possible to see a high confidence score for cells, but a low confidence score for the row?<br>
ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md
- ignite-2023 Previously updated : 01/19/2024 Last updated : 05/23/2024 monikerRange: '>=doc-intel-3.1.0'
monikerRange: '>=doc-intel-3.1.0'
:::moniker range=">=doc-intel-3.1.0"
+## Capabilities
+ Document Intelligence supports more sophisticated and modular analysis capabilities. Use the add-on features to extend the results to include more features extracted from your documents. Some add-on features incur an extra cost. These optional features can be enabled and disabled depending on the scenario of the document extraction. To enable a feature, add the associated feature name to the `features` query string property. You can enable more than one add-on feature on a request by providing a comma-separated list of features. The following add-on capabilities are available for `2023-07-31 (GA)` and later releases. * [`ocrHighResolution`](#high-resolution-extraction)
Document Intelligence supports more sophisticated and modular analysis capabilit
> [!NOTE] >
-> Not all add-on capabilities are supported by all models. For more information, *see* [model data extraction](concept-model-overview.md#model-analysis-features).
+> * Not all add-on capabilities are supported by all models. For more information, *see* [model data extraction](concept-model-overview.md#model-analysis-features).
+>
+> * Add-on capabilities are currently not supported for Microsoft Office file types.
The following add-on capabilities are available for `2024-02-29-preview` and later releases:
The following add-on capabilities are available for`2024-02-29-preview`, `2024-0
::: moniker-end
-|Add-on Capability| Add-On/Free|[2024-02-29-preview](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)|
+## Version availability
+
+|Add-on Capability| Add-On/Free|[2024-02-29-preview](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)|[v2.1 (GA)](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)|
|-|--||--||| |Font property extraction|Add-On| ✔️| ✔️| n/a| n/a| |Formula extraction|Add-On| ✔️| ✔️| n/a| n/a|
The following add-on capabilities are available for`2024-02-29-preview`, `2024-0
|Key value pairs|Free| ✔️|n/a|n/a| n/a| |Query fields|Add-On*| ✔️|n/a|n/a| n/a|
+✱ Add-On - Query fields are priced differently than the other add-on features. See [pricing](https://azure.microsoft.com/pricing/details/ai-document-intelligence/) for details.
+
+## Supported file formats
+
+* `PDF`
-Add-On* - Query fields are priced differently than the other add-on features. See [pricing](https://azure.microsoft.com/pricing/details/ai-document-intelligence/) for details.
+* Images: `JPEG`/`JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`
+
+✱ Microsoft Office files are currently not supported.
## High resolution extraction The task of recognizing small text from large-size documents, like engineering drawings, is a challenge. Often the text is mixed with other graphical elements and has varying fonts, sizes, and orientations. Moreover, the text can be broken into separate parts or connected with other symbols. Document Intelligence now supports extracting content from these types of documents with the `ocr.highResolution` capability. You get improved quality of content extraction from A1/A2/A3 documents by enabling this add-on capability.
-### REST API
- ::: moniker range="doc-intel-4.0.0"
+### [REST API](#tab/rest-api)
```bash {your-resource-endpoint}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-02-29-preview&features=ocrHighResolution ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+formUrl = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/add-on/add-on-highres.png?raw=true"
+poller = document_intelligence_client.begin_analyze_document(
+ "prebuilt-layout",
+ AnalyzeDocumentRequest(url_source=formUrl),
+ features=[DocumentAnalysisFeature.OCR_HIGH_RESOLUTION], # Specify which add-on capabilities to enable.
+)
+result: AnalyzeResult = poller.result()
+
+# [START analyze_with_highres]
+if result.styles and any([style.is_handwritten for style in result.styles]):
+ print("Document contains handwritten content")
+else:
+ print("Document does not contain handwritten content")
+
+for page in result.pages:
+ print(f"-Analyzing layout from page #{page.page_number}-")
+ print(f"Page has width: {page.width} and height: {page.height}, measured with unit: {page.unit}")
+
+ if page.lines:
+ for line_idx, line in enumerate(page.lines):
+ words = get_words(page, line)
+ print(
+ f"...Line # {line_idx} has word count {len(words)} and text '{line.content}' "
+ f"within bounding polygon '{line.polygon}'"
+ )
+
+ for word in words:
+ print(f"......Word '{word.content}' has a confidence of {word.confidence}")
+
+ if page.selection_marks:
+ for selection_mark in page.selection_marks:
+ print(
+ f"Selection mark is '{selection_mark.state}' within bounding polygon "
+ f"'{selection_mark.polygon}' and has a confidence of {selection_mark.confidence}"
+ )
+
+if result.tables:
+ for table_idx, table in enumerate(result.tables):
+ print(f"Table # {table_idx} has {table.row_count} rows and " f"{table.column_count} columns")
+ if table.bounding_regions:
+ for region in table.bounding_regions:
+ print(f"Table # {table_idx} location on page: {region.page_number} is {region.polygon}")
+ for cell in table.cells:
+ print(f"...Cell[{cell.row_index}][{cell.column_index}] has text '{cell.content}'")
+ if cell.bounding_regions:
+ for region in cell.bounding_regions:
+ print(f"...content on page {region.page_number} is within bounding polygon '{region.polygon}'")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Add-on_capabilities/sample_analyze_addon_highres.py)
+### [Output](#tab/output)
+```json
+"styles": [true],
+"pages": [
+ {
+ "page_number": 1,
+ "width": 1000,
+ "height": 800,
+ "unit": "px",
+ "lines": [
+ {
+ "line_idx": 1,
+ "content": "This",
+ "polygon": [10, 20, 30, 40],
+ "words": [
+ {
+ "content": "This",
+ "confidence": 0.98
+ }
+ ]
+ }
+ ],
+ "selection_marks": [
+ {
+ "state": "selected",
+ "polygon": [50, 60, 70, 80],
+ "confidence": 0.91
+ }
+ ]
+ }
+],
+"tables": [
+ {
+ "table_idx": 1,
+ "row_count": 3,
+ "column_count": 4,
+ "bounding_regions": [
+ {
+ "page_number": 1,
+ "polygon": [100, 200, 300, 400]
+ }
+ ],
+ "cells": [
+ {
+ "row_index": 1,
+ "column_index": 1,
+ "content": "Content 1",
+ "bounding_regions": [
+ {
+ "page_number": 1,
+ "polygon": [110, 210, 310, 410]
+ }
+ ]
+ }
+ ]
+ }
+]
+
+```
+
+### [REST API](#tab/rest-api)
```bash {your-resource-endpoint}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=ocrHighResolution ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+url = "(https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/add-on/add-on-highres.png?raw=true"
+poller = document_analysis_client.begin_analyze_document_from_url(
+ "prebuilt-layout", document_url=url, features=[AnalysisFeature.OCR_HIGH_RESOLUTION] # Specify which add-on capabilities to enable.
+)
+result = poller.result()
+
+# [START analyze_with_highres]
+if any([style.is_handwritten for style in result.styles]):
+ print("Document contains handwritten content")
+else:
+ print("Document does not contain handwritten content")
+
+for page in result.pages:
+ print(f"-Analyzing layout from page #{page.page_number}-")
+ print(
+ f"Page has width: {page.width} and height: {page.height}, measured with unit: {page.unit}"
+ )
+
+ for line_idx, line in enumerate(page.lines):
+ words = line.get_words()
+ print(
+ f"...Line # {line_idx} has word count {len(words)} and text '{line.content}' "
+ f"within bounding polygon '{format_polygon(line.polygon)}'"
+ )
+
+ for word in words:
+ print(
+ f"......Word '{word.content}' has a confidence of {word.confidence}"
+ )
+
+ for selection_mark in page.selection_marks:
+ print(
+ f"Selection mark is '{selection_mark.state}' within bounding polygon "
+ f"'{format_polygon(selection_mark.polygon)}' and has a confidence of {selection_mark.confidence}"
+ )
+
+for table_idx, table in enumerate(result.tables):
+ print(
+ f"Table # {table_idx} has {table.row_count} rows and "
+ f"{table.column_count} columns"
+ )
+ for region in table.bounding_regions:
+ print(
+ f"Table # {table_idx} location on page: {region.page_number} is {format_polygon(region.polygon)}"
+ )
+ for cell in table.cells:
+ print(
+ f"...Cell[{cell.row_index}][{cell.column_index}] has text '{cell.content}'"
+ )
+ for region in cell.bounding_regions:
+ print(
+ f"...content on page {region.page_number} is within bounding polygon '{format_polygon(region.polygon)}'"
+ )
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Add-on_capabilities/sample_analyze_addon_highres.py)
+### [Output](#tab/output)
+```json
+"styles": [true],
+"pages": [
+ {
+ "page_number": 1,
+ "width": 1000,
+ "height": 800,
+ "unit": "px",
+ "lines": [
+ {
+ "line_idx": 1,
+ "content": "This",
+ "polygon": [10, 20, 30, 40],
+ "words": [
+ {
+ "content": "This",
+ "confidence": 0.98
+ }
+ ]
+ }
+ ],
+ "selection_marks": [
+ {
+ "state": "selected",
+ "polygon": [50, 60, 70, 80],
+ "confidence": 0.91
+ }
+ ]
+ }
+],
+"tables": [
+ {
+ "table_idx": 1,
+ "row_count": 3,
+ "column_count": 4,
+ "bounding_regions": [
+ {
+ "page_number": 1,
+ "polygon": [100, 200, 300, 400]
+ }
+ ],
+ "cells": [
+ {
+ "row_index": 1,
+ "column_index": 1,
+ "content": "Content 1",
+ "bounding_regions": [
+ {
+ "page_number": 1,
+ "polygon": [110, 210, 310, 410]
+ }
+ ]
+ }
+ ]
+ }
+]
+
+```
+ ## Formula extraction
The `ocr.formula` capability extracts all identified formulas, such as mathemati
> [!NOTE] > The `confidence` score is hard-coded.
- ```json
- "content": ":formula:",
- "pages": [
- {
- "pageNumber": 1,
- "formulas": [
- {
- "kind": "inline",
- "value": "\\frac { \\partial a } { \\partial b }",
- "polygon": [...],
- "span": {...},
- "confidence": 0.99
- },
- {
- "kind": "display",
- "value": "y = a \\times b + a \\times c",
- "polygon": [...],
- "span": {...},
- "confidence": 0.99
- }
- ]
- }
- ]
- ```
-
- ### REST API
- ::: moniker range="doc-intel-4.0.0"
+### [REST API](#tab/rest-api)
```bash {your-resource-endpoint}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-02-29-preview&features=formulas ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+formUrl = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/add-on/layout-formulas.png?raw=true"
+poller = document_intelligence_client.begin_analyze_document(
+ "prebuilt-layout",
+ AnalyzeDocumentRequest(url_source=formUrl),
+ features=[DocumentAnalysisFeature.FORMULAS], # Specify which add-on capabilities to enable
+)
+result: AnalyzeResult = poller.result()
+
+# [START analyze_formulas]
+for page in result.pages:
+ print(f"-Formulas detected from page #{page.page_number}-")
+ if page.formulas:
+ inline_formulas = [f for f in page.formulas if f.kind == "inline"]
+ display_formulas = [f for f in page.formulas if f.kind == "display"]
+
+ # To learn the detailed concept of "polygon" in the following content, visit: https://aka.ms/bounding-region
+ print(f"Detected {len(inline_formulas)} inline formulas.")
+ for formula_idx, formula in enumerate(inline_formulas):
+ print(f"- Inline #{formula_idx}: {formula.value}")
+ print(f" Confidence: {formula.confidence}")
+ print(f" Bounding regions: {formula.polygon}")
+
+ print(f"\nDetected {len(display_formulas)} display formulas.")
+ for formula_idx, formula in enumerate(display_formulas):
+ print(f"- Display #{formula_idx}: {formula.value}")
+ print(f" Confidence: {formula.confidence}")
+ print(f" Bounding regions: {formula.polygon}")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Add-on_capabilities/sample_analyze_addon_formulas.py)
+### [Output](#tab/output)
+```json
+"content": ":formula:",
+ "pages": [
+ {
+ "pageNumber": 1,
+ "formulas": [
+ {
+ "kind": "inline",
+ "value": "\\frac { \\partial a } { \\partial b }",
+ "polygon": [...],
+ "span": {...},
+ "confidence": 0.99
+ },
+ {
+ "kind": "display",
+ "value": "y = a \\times b + a \\times c",
+ "polygon": [...],
+ "span": {...},
+ "confidence": 0.99
+ }
+ ]
+ }
+ ]
+```
+### [REST API](#tab/rest-api)
```bash {your-resource-endpoint}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=formulas ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+url = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/add-on/layout-formulas.png?raw=true"
+poller = document_analysis_client.begin_analyze_document_from_url(
+ "prebuilt-layout", document_url=url, features=[AnalysisFeature.FORMULAS] # Specify which add-on capabilities to enable
+)
+result = poller.result()
+
+# [START analyze_formulas]
+for page in result.pages:
+ print(f"-Formulas detected from page #{page.page_number}-")
+ inline_formulas = [f for f in page.formulas if f.kind == "inline"]
+ display_formulas = [f for f in page.formulas if f.kind == "display"]
+
+ print(f"Detected {len(inline_formulas)} inline formulas.")
+ for formula_idx, formula in enumerate(inline_formulas):
+ print(f"- Inline #{formula_idx}: {formula.value}")
+ print(f" Confidence: {formula.confidence}")
+ print(f" Bounding regions: {format_polygon(formula.polygon)}")
+
+ print(f"\nDetected {len(display_formulas)} display formulas.")
+ for formula_idx, formula in enumerate(display_formulas):
+ print(f"- Display #{formula_idx}: {formula.value}")
+ print(f" Confidence: {formula.confidence}")
+ print(f" Bounding regions: {format_polygon(formula.polygon)}")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Add-on_capabilities/sample_analyze_addon_formulas.py)
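Similarly, the `v3.1` snippets assume a `document_analysis_client` created with the `azure-ai-formrecognizer` package, plus a small `format_polygon` helper used when printing bounding regions. The following sketch shows one way to define both, with placeholder endpoint and key values; the helper follows the pattern used in the linked samples.

```Python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import AnalysisFeature, DocumentAnalysisClient

# Placeholders: substitute your resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"

# Client used by the v3.1 add-on capability snippets in this article.
document_analysis_client = DocumentAnalysisClient(
    endpoint=endpoint, credential=AzureKeyCredential(key)
)

def format_polygon(polygon):
    # Render a list of points as "[x, y], [x, y], ..." for display.
    if not polygon:
        return "N/A"
    return ", ".join([f"[{point.x}, {point.y}]" for point in polygon])
```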
+### [Output](#tab/output)
+```json
+ "content": ":formula:",
+ "pages": [
+ {
+ "pageNumber": 1,
+ "formulas": [
+ {
+ "kind": "inline",
+ "value": "\\frac { \\partial a } { \\partial b }",
+ "polygon": [...],
+ "span": {...},
+ "confidence": 0.99
+ },
+ {
+ "kind": "display",
+ "value": "y = a \\times b + a \\times c",
+ "polygon": [...],
+ "span": {...},
+ "confidence": 0.99
+ }
+ ]
+ }
+ ]
+```
+ ## Font property extraction The `ocr.font` capability extracts all font properties of text extracted in the `styles` collection as a top-level object under `content`. Each style object specifies a single font property, the text span it applies to, and its corresponding confidence score. The existing style property is extended with more font properties such as `similarFontFamily` for the font of the text, `fontStyle` for styles such as italic and normal, `fontWeight` for bold or normal, `color` for color of the text, and `backgroundColor` for color of the text bounding box.
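Each style entry applies to one or more spans, which are offset and length windows into the top-level `content` string. As a minimal illustration based on the sample output below, a span with offset 4 and length 3 over the content `Foo bar` covers the text `bar`:

```Python
content = "Foo bar"
span = {"offset": 4, "length": 3}

# Slice the top-level content string with the span's offset and length.
print(content[span["offset"] : span["offset"] + span["length"]])  # prints "bar"
```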
- ```json
- "content": "Foo bar",
- "styles": [
- {
- "similarFontFamily": "Arial, sans-serif",
- "spans": [ { "offset": 0, "length": 3 } ],
- "confidence": 0.98
- },
- {
- "similarFontFamily": "Times New Roman, serif",
- "spans": [ { "offset": 4, "length": 3 } ],
- "confidence": 0.98
- },
- {
- "fontStyle": "italic",
- "spans": [ { "offset": 1, "length": 2 } ],
- "confidence": 0.98
- },
- {
- "fontWeight": "bold",
- "spans": [ { "offset": 2, "length": 3 } ],
- "confidence": 0.98
- },
- {
- "color": "#FF0000",
- "spans": [ { "offset": 4, "length": 2 } ],
- "confidence": 0.98
- },
- {
- "backgroundColor": "#00FF00",
- "spans": [ { "offset": 5, "length": 2 } ],
- "confidence": 0.98
- }
- ]
- ```
-
-### REST API
- ::: moniker range="doc-intel-4.0.0"
+### [REST API](#tab/rest-api)
+ ```bash {your-resource-endpoint}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-02-29-preview&features=styleFont ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+formUrl = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/receipt/receipt-with-tips.png?raw=true"
+poller = document_intelligence_client.begin_analyze_document(
+ "prebuilt-layout",
+ AnalyzeDocumentRequest(url_source=formUrl),
+ features=[DocumentAnalysisFeature.STYLE_FONT] # Specify which add-on capabilities to enable.
+)
+result: AnalyzeResult = poller.result()
+
+# [START analyze_fonts]
+# DocumentStyle has the following font related attributes:
+similar_font_families = defaultdict(list)  # e.g., 'Arial, sans-serif'
+font_styles = defaultdict(list)  # e.g., 'italic'
+font_weights = defaultdict(list) # e.g., 'bold'
+font_colors = defaultdict(list) # in '#rrggbb' hexadecimal format
+font_background_colors = defaultdict(list) # in '#rrggbb' hexadecimal format
+
+if result.styles and any([style.is_handwritten for style in result.styles]):
+ print("Document contains handwritten content")
+else:
+ print("Document does not contain handwritten content")
+ return
+
+print("\n-Fonts styles detected in the document-")
+
+# Iterate over the styles and group them by their font attributes.
+for style in result.styles:
+ if style.similar_font_family:
+ similar_font_families[style.similar_font_family].append(style)
+ if style.font_style:
+ font_styles[style.font_style].append(style)
+ if style.font_weight:
+ font_weights[style.font_weight].append(style)
+ if style.color:
+ font_colors[style.color].append(style)
+ if style.background_color:
+ font_background_colors[style.background_color].append(style)
+
+print(f"Detected {len(similar_font_families)} font families:")
+for font_family, styles in similar_font_families.items():
+ print(f"- Font family: '{font_family}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+
+print(f"\nDetected {len(font_styles)} font styles:")
+for font_style, styles in font_styles.items():
+ print(f"- Font style: '{font_style}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+
+print(f"\nDetected {len(font_weights)} font weights:")
+for font_weight, styles in font_weights.items():
+ print(f"- Font weight: '{font_weight}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+
+print(f"\nDetected {len(font_colors)} font colors:")
+for font_color, styles in font_colors.items():
+ print(f"- Font color: '{font_color}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+
+print(f"\nDetected {len(font_background_colors)} font background colors:")
+for font_background_color, styles in font_background_colors.items():
+ print(f"- Font background color: '{font_background_color}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Add-on_capabilities/sample_analyze_addon_fonts.py)
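The font snippets are excerpts from larger functions in the linked samples (which is why an early `return` appears), and they rely on `defaultdict` from the standard library plus a `get_styled_text` helper that joins the text covered by a group of styles. A sketch of that helper, following the linked sample, is shown here.

```Python
from collections import defaultdict  # used for the grouping dictionaries above

def get_styled_text(styles, content):
    # Merge the spans from all styles in the group, then join the text each span covers.
    spans = [span for style in styles for span in style.spans]
    spans.sort(key=lambda span: span.offset)
    return ",".join([content[span.offset : span.offset + span.length] for span in spans])
```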
+### [Output](#tab/output)
+```json
+"content": "Foo bar",
+"styles": [
+ {
+ "similarFontFamily": "Arial, sans-serif",
+ "spans": [ { "offset": 0, "length": 3 } ],
+ "confidence": 0.98
+ },
+ {
+ "similarFontFamily": "Times New Roman, serif",
+ "spans": [ { "offset": 4, "length": 3 } ],
+ "confidence": 0.98
+ },
+ {
+ "fontStyle": "italic",
+ "spans": [ { "offset": 1, "length": 2 } ],
+ "confidence": 0.98
+ },
+ {
+ "fontWeight": "bold",
+ "spans": [ { "offset": 2, "length": 3 } ],
+ "confidence": 0.98
+ },
+ {
+ "color": "#FF0000",
+ "spans": [ { "offset": 4, "length": 2 } ],
+ "confidence": 0.98
+ },
+ {
+ "backgroundColor": "#00FF00",
+ "spans": [ { "offset": 5, "length": 2 } ],
+ "confidence": 0.98
+ }
+ ]
+```
++
+### [REST API](#tab/rest-api)
```bash {your-resource-endpoint}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=styleFont ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+url = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/receipt/receipt-with-tips.png?raw=true"
+poller = document_analysis_client.begin_analyze_document_from_url(
+ "prebuilt-layout", document_url=url, features=[AnalysisFeature.STYLE_FONT] # Specify which add-on capabilities to enable.
+)
+result = poller.result()
+
+# [START analyze_fonts]
+# DocumentStyle has the following font related attributes:
+similar_font_families = defaultdict(list)  # e.g., 'Arial, sans-serif'
+font_styles = defaultdict(list)  # e.g., 'italic'
+font_weights = defaultdict(list) # e.g., 'bold'
+font_colors = defaultdict(list) # in '#rrggbb' hexadecimal format
+font_background_colors = defaultdict(list) # in '#rrggbb' hexadecimal format
+
+if any([style.is_handwritten for style in result.styles]):
+ print("Document contains handwritten content")
+else:
+ print("Document does not contain handwritten content")
+
+print("\n-Fonts styles detected in the document-")
+
+# Iterate over the styles and group them by their font attributes.
+for style in result.styles:
+ if style.similar_font_family:
+ similar_font_families[style.similar_font_family].append(style)
+ if style.font_style:
+ font_styles[style.font_style].append(style)
+ if style.font_weight:
+ font_weights[style.font_weight].append(style)
+ if style.color:
+ font_colors[style.color].append(style)
+ if style.background_color:
+ font_background_colors[style.background_color].append(style)
+
+print(f"Detected {len(similar_font_families)} font families:")
+for font_family, styles in similar_font_families.items():
+ print(f"- Font family: '{font_family}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+
+print(f"\nDetected {len(font_styles)} font styles:")
+for font_style, styles in font_styles.items():
+ print(f"- Font style: '{font_style}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+
+print(f"\nDetected {len(font_weights)} font weights:")
+for font_weight, styles in font_weights.items():
+ print(f"- Font weight: '{font_weight}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+
+print(f"\nDetected {len(font_colors)} font colors:")
+for font_color, styles in font_colors.items():
+ print(f"- Font color: '{font_color}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+
+print(f"\nDetected {len(font_background_colors)} font background colors:")
+for font_background_color, styles in font_background_colors.items():
+ print(f"- Font background color: '{font_background_color}'")
+ print(f" Text: '{get_styled_text(styles, result.content)}'")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Add-on_capabilities/sample_analyze_addon_fonts.py)
+
+### [Output](#tab/output)
+```json
+"content": "Foo bar",
+"styles": [
+ {
+ "similarFontFamily": "Arial, sans-serif",
+ "spans": [ { "offset": 0, "length": 3 } ],
+ "confidence": 0.98
+ },
+ {
+ "similarFontFamily": "Times New Roman, serif",
+ "spans": [ { "offset": 4, "length": 3 } ],
+ "confidence": 0.98
+ },
+ {
+ "fontStyle": "italic",
+ "spans": [ { "offset": 1, "length": 2 } ],
+ "confidence": 0.98
+ },
+ {
+ "fontWeight": "bold",
+ "spans": [ { "offset": 2, "length": 3 } ],
+ "confidence": 0.98
+ },
+ {
+ "color": "#FF0000",
+ "spans": [ { "offset": 4, "length": 2 } ],
+ "confidence": 0.98
+ },
+ {
+ "backgroundColor": "#00FF00",
+ "spans": [ { "offset": 5, "length": 2 } ],
+ "confidence": 0.98
+ }
+ ]
+```
+ ## Barcode property extraction
The `ocr.barcode` capability extracts all identified barcodes in the `barcodes`
| `ITF` |:::image type="content" source="media/barcodes/interleaved-two-five.png" alt-text="Screenshot of the interleaved-two-of-five barcode (ITF).":::| | `Data Matrix` |:::image type="content" source="media/barcodes/datamatrix.gif" alt-text="Screenshot of the Data Matrix.":::|
-### REST API
- ::: moniker range="doc-intel-4.0.0"-
+### [REST API](#tab/rest-api)
```bash {your-resource-endpoint}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-02-29-preview&features=barcodes ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+formUrl = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/add-on/add-on-barcodes.jpg?raw=true"
+poller = document_intelligence_client.begin_analyze_document(
+ "prebuilt-read",
+ AnalyzeDocumentRequest(url_source=formUrl),
+ features=[DocumentAnalysisFeature.BARCODES] # Specify which add-on capabilities to enable.
+)
+result: AnalyzeResult = poller.result()
+
+# [START analyze_barcodes]
+# Iterate over extracted barcodes on each page.
+for page in result.pages:
+ print(f"-Barcodes detected from page #{page.page_number}-")
+ if page.barcodes:
+ print(f"Detected {len(page.barcodes)} barcodes:")
+ for barcode_idx, barcode in enumerate(page.barcodes):
+ print(f"- Barcode #{barcode_idx}: {barcode.value}")
+ print(f" Kind: {barcode.kind}")
+ print(f" Confidence: {barcode.confidence}")
+ print(f" Bounding regions: {barcode.polygon}")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Add-on_capabilities/sample_analyze_addon_barcodes.py)
+### [Output](#tab/output)
+```text
+-Barcodes detected from page #1-
+Detected 2 barcodes:
+- Barcode #0: 123456
+ Kind: QRCode
+ Confidence: 0.95
+ Bounding regions: [10.5, 20.5, 30.5, 40.5]
+- Barcode #1: 789012
+ Kind: QRCode
+ Confidence: 0.98
+ Bounding regions: [50.5, 60.5, 70.5, 80.5]
+```
+ :::moniker-end :::moniker range="doc-intel-3.1.0"
+### [REST API](#tab/rest-api)
```bash {your-resource-endpoint}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=barcodes ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+url = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/add-on/add-on-barcodes.jpg?raw=true"
+poller = document_analysis_client.begin_analyze_document_from_url(
+ "prebuilt-layout", document_url=url, features=[AnalysisFeature.BARCODES] # Specify which add-on capabilities to enable.
+)
+result = poller.result()
+
+# [START analyze_barcodes]
+# Iterate over extracted barcodes on each page.
+for page in result.pages:
+ print(f"-Barcodes detected from page #{page.page_number}-")
+ print(f"Detected {len(page.barcodes)} barcodes:")
+ for barcode_idx, barcode in enumerate(page.barcodes):
+ print(f"- Barcode #{barcode_idx}: {barcode.value}")
+ print(f" Kind: {barcode.kind}")
+ print(f" Confidence: {barcode.confidence}")
+ print(f" Bounding regions: {format_polygon(barcode.polygon)}")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Add-on_capabilities/sample_analyze_addon_barcodes.py)
+### [Output](#tab/output)
+```text
+-Barcodes detected from page #1-
+Detected 2 barcodes:
+- Barcode #0: 123456
+ Kind: QRCode
+ Confidence: 0.95
+ Bounding regions: [10.5, 20.5, 30.5, 40.5]
+- Barcode #1: 789012
+ Kind: QRCode
+ Confidence: 0.98
+ Bounding regions: [50.5, 60.5, 70.5, 80.5]
+```
+ ## Language detection Adding the `languages` feature to the `analyzeResult` request predicts the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`. +
+### [REST API](#tab/rest-api)
+```bash
+{your-resource-endpoint}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-02-29-preview&features=languages
+```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+formUrl = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/add-on/add-on-fonts_and_languages.png?raw=true"
+poller = document_intelligence_client.begin_analyze_document(
+ "prebuilt-layout",
+ AnalyzeDocumentRequest(url_source=formUrl),
+ features=[DocumentAnalysisFeature.LANGUAGES] # Specify which add-on capabilities to enable.
+)
+result: AnalyzeResult = poller.result()
+
+# [START analyze_languages]
+print("-Languages detected in the document-")
+if result.languages:
+ print(f"Detected {len(result.languages)} languages:")
+ for lang_idx, lang in enumerate(result.languages):
+ print(f"- Language #{lang_idx}: locale '{lang.locale}'")
+ print(f" Confidence: {lang.confidence}")
+ print(
+ f" Text: '{','.join([result.content[span.offset : span.offset + span.length] for span in lang.spans])}'"
+ )
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Add-on_capabilities/sample_analyze_addon_languages.py)
+
+### [Output](#tab/output)
```json "languages": [ {
Adding the `languages` feature to the `analyzeResult` request predicts the detec
}, ] ```-
-### REST API
--
-```bash
-{your-resource-endpoint}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-02-29-preview&features=languages
-```
-+ :::moniker-end :::moniker range="doc-intel-3.1.0"
+### [REST API](#tab/rest-api)
```bash {your-resource-endpoint}.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&features=languages ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+url = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/add-on/add-on-fonts_and_languages.png?raw=true"
+poller = document_analysis_client.begin_analyze_document_from_url(
+ "prebuilt-layout", document_url=url, features=[AnalysisFeature.LANGUAGES] # Specify which add-on capabilities to enable.
+)
+result = poller.result()
+
+# [START analyze_languages]
+print("-Languages detected in the document-")
+print(f"Detected {len(result.languages)} languages:")
+for lang_idx, lang in enumerate(result.languages):
+ print(f"- Language #{lang_idx}: locale '{lang.locale}'")
+ print(f" Confidence: {lang.confidence}")
+ print(f" Text: '{','.join([result.content[span.offset : span.offset + span.length] for span in lang.spans])}'")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Add-on_capabilities/sample_analyze_addon_languages.py)
+### [Output](#tab/output)
+```json
+"languages": [
+ {
+ "spans": [
+ {
+ "offset": 0,
+ "length": 131
+ }
+ ],
+ "locale": "en",
+ "confidence": 0.7
+ },
+]
+```
+ ## Key-value pairs
For query field extraction, specify the fields you want to extract and Document
* In addition to the query fields, the response includes text, tables, selection marks, and other relevant data.
-### REST API
+### [REST API](#tab/rest-api)
```bash {your-resource-endpoint}.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-02-29-preview&features=queryFields&queryFields=TERMS ```
+### [Sample code](#tab/sample-code)
+```Python
+# Analyze a document at a URL:
+formUrl = "https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Data/invoice/simple-invoice.png?raw=true"
+poller = document_intelligence_client.begin_analyze_document(
+ "prebuilt-layout",
+ AnalyzeDocumentRequest(url_source=formUrl),
+ features=[DocumentAnalysisFeature.QUERY_FIELDS], # Specify which add-on capabilities to enable.
+ query_fields=["Address", "InvoiceNumber"], # Set the features and provide a comma-separated list of field names.
+)
+result: AnalyzeResult = poller.result()
+print("Here are extra fields in result:\n")
+if result.documents:
+ for doc in result.documents:
+ if doc.fields and doc.fields["Address"]:
+ print(f"Address: {doc.fields['Address'].value_string}")
+ if doc.fields and doc.fields["InvoiceNumber"]:
+ print(f"Invoice number: {doc.fields['InvoiceNumber'].value_string}")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Add-on_capabilities/sample_analyze_addon_query_fields.py)
+
+### [Output](#tab/output)
+```text
+Address: 1 Redmond way Suite 6000 Redmond, WA Sunnayvale, 99243
+Invoice number: 34278587
+```
+++ ## Next steps
For query field extraction, specify the fields you want to extract and Document
> [!div class="nextstepaction"] > SDK samples: > [**python**](/python/api/overview/azure/ai-documentintelligence-readme)+
+> [!div class="nextstepaction"]
+> Find more samples:
+> [**Add-on capabilities**](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/main/Python(v4.0)/Add-on_capabilities)
+
+> [!div class="nextstepaction"]
+> Find more samples:
+> [**Add-on capabilities**](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/v3.1(2023-07-31-GA)/Python(v3.1)/Add-on_capabilities)
ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-business-card.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 05/23/2024
::: moniker range=">=doc-intel-3.0.0"
-The Document Intelligence business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract data from business card images. The API analyzes printed business cards; extracts key information such as first name, last name, company name, email address, and phone number; and returns a structured JSON data representation.
+The Document Intelligence business card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract data from business card images. The API analyzes printed business cards; extracts key information such as first name, surname, company name, email address, and phone number; and returns a structured JSON data representation.
## Business card data extraction
-Business cards are a great way to represent a business or a professional. The company logo, fonts and background images found in business cards help promote the company branding and differentiate it from others. Applying OCR and machine-learning based techniques to automate scanning of business cards is a common image processing scenario. Enterprise systems used by sales and marketing teams typically have business card data extraction capability integration into for the benefit of their users.
+Business cards are a great way to represent a business or a professional. The company logo, fonts, and background images found in business cards help promote the company branding and differentiate it from others. Applying OCR and machine-learning-based techniques to automate scanning of business cards is a common image processing scenario. Enterprise systems used by sales and marketing teams typically integrate business card data extraction capabilities for the benefit of their users.
***Sample business card processed with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)***
Document Intelligence **v3.0:2022-08-31 (GA)** supports the following tools, app
| Feature | Resources | Model ID | |-|-|--|
-|**Business card model**| &bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)<br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)<br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)<br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)<br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)<br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-businessCard**|
+|**Business card model**| &bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)<br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)<br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)<br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)<br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)<br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-businessCard**|
::: moniker-end
See how data, including name, job title, address, email, and company name, is ex
1. Select **Run analysis**. The Document Intelligence Sample Labeling tool calls the Analyze Prebuilt API and analyze the document.
-1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
+1. View the results - see the key-value pairs extracted, line items, highlighted text extracted, and tables detected.
:::image type="content" source="media/business-card-results.png" alt-text="Screenshot of the business card model analyze results operation.":::
See how data, including name, job title, address, email, and company name, is ex
::: moniker range="doc-intel-2.1.0"
-* Supported file formats: JPEG, PNG, PDF, and TIFF
-* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* Supported file formats: JPEG, PNG, PDF, and TIFF
+* For PDF and TIFF files, up to 2,000 pages are processed. For free tier subscribers, only the first two pages are processed.
* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels. ::: moniker-end
See how data, including name, job title, address, email, and company name, is ex
| ContactNames | Array of objects | Contact name | | | FirstName | String | First (given) name of contact | | | LastName | String | Last (family) name of contact | |
-| CompanyNames | Array of strings | Company name(s)| |
-| Departments | Array of strings | Department(s) or organization(s) of contact | |
-| JobTitles | Array of strings | Listed Job title(s) of contact | |
-| Emails | Array of strings | Contact email address(es) | |
-| Websites | Array of strings | Company website(s) | |
-| Addresses | Array of strings | Address(es) extracted from business card | |
-| MobilePhones | Array of phone numbers | Mobile phone number(s) from business card |+1 xxx xxx xxxx |
-| Faxes | Array of phone numbers | Fax phone number(s) from business card | +1 xxx xxx xxxx |
-| WorkPhones | Array of phone numbers | Work phone number(s) from business card | +1 xxx xxx xxxx |
-| OtherPhones | Array of phone numbers | Other phone number(s) from business card | +1 xxx xxx xxxx |
+| CompanyNames | Array of strings | Company name| |
+| Departments | Array of strings | Department or organization of contact | |
+| JobTitles | Array of strings | Listed Job title of contact | |
+| Emails | Array of strings | Contact email address | |
+| Websites | Array of strings | Company website | |
+| Addresses | Array of strings | Address extracted from business card | |
+| MobilePhones | Array of phone numbers | Mobile phone number from business card |+1 xxx xxx xxxx |
+| Faxes | Array of phone numbers | Fax phone number from business card | +1 xxx xxx xxxx |
+| WorkPhones | Array of phone numbers | Work phone number from business card | +1 xxx xxx xxxx |
+| OtherPhones | Array of phone numbers | Other phone number from business card | +1 xxx xxx xxxx |
::: moniker-end
See how data, including name, job title, address, email, and company name, is ex
| JobTitles | array of strings | Listed Job title of contact | ["Software Engineer"] | | Emails | array of strings | Contact email extracted from business card | ["johndoe@contoso.com"] | | Websites | array of strings | Website extracted from business card | ["https://www.contoso.com"] |
-| Addresses | array of strings | Address extracted from business card | ["123 Main Street, Redmond, WA 98052"] |
+| Addresses | array of strings | Address extracted from business card | ["123 Main Street, Redmond, Washington 98052"] |
| MobilePhones | array of phone numbers | Mobile phone number extracted from business card | ["+19876543210"] | | Faxes | array of phone numbers | Fax phone number extracted from business card | ["+19876543211"] | | WorkPhones | array of phone numbers | Work phone number extracted from business card | ["+19876543231"] |
ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-composed-models.md
- ignite-2023 Previously updated : 01/19/2024 Last updated : 05/23/2024
Document Intelligence **v3.0:2022-08-31 (GA)** supports the following tools, app
| Feature | Resources | |-|-|
-|_**Custom model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|
-| _**Composed model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>&bullet; [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
+|_**Custom model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|
+| _**Composed model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>&bullet; [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
::: moniker-end ::: moniker range="doc-intel-2.1.0"
Document Intelligence v2.1 supports the following resources:
| Feature | Resources | |-|-| |_**Custom model**_| &bullet; [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net)</br>&bullet; [REST API](how-to-guides/compose-custom-models.md?view=doc-intel-2.1.0&tabs=rest&preserve-view=true)</br>&bullet; [Client library SDK](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</br>&bullet; [Document Intelligence Docker container](containers/install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)|
-| _**Composed model**_ |&bullet; [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net/)</br>&bullet; [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/Compose)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</br>&bullet; JavaScript SDK</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
+| _**Composed model**_ |&bullet; [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net/)</br>&bullet; [REST API](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</br>&bullet; JavaScript SDK</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
::: moniker-end ## Next steps
ai-services Concept Contract https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-contract.md
- ignite-2023 Previously updated : 03/06/2024 Last updated : 05/23/2024 monikerRange: '>=doc-intel-3.0.0'
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Contract model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-contract**|
+|**Contract model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-contract**|
::: moniker-end ## Input requirements
ai-services Concept Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-credit-card.md
Last updated 02/29/2024-+ monikerRange: '>=doc-intel-4.0.0' <!-- markdownlint-disable MD033 -->
ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md
Previously updated : 02/29/2024 Last updated : 05/23/2024 - references_regions
As of October 18, 2022, Document Intelligence custom neural model training will
> [!TIP] > You can [copy a model](disaster-recovery.md#copy-api-overview) trained in one of the select regions listed to **any other region** and use it accordingly. >
-> Use the [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/CopyDocumentModelTo) or [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region.
+> Use the [**REST API**](/rest/api/aiservices/document-models/copy-model-to?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP) or [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region.
:::moniker-end
ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-template.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 05/23/2024 monikerRange: 'doc-intel-4.0.0 || <=doc-intel-3.1.0'
https://{endpoint}/formrecognizer/documentModels:build?api-version=2023-07-31
::: moniker range="doc-intel-2.1.0"
-Custom (template) models are generally available with the [v2.1 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm).
+Custom (template) models are generally available with the [v2.1 API](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true).
| Model | REST API | SDK | Label and Test Models| |--|--|--|--|
-| Custom model (template) | [Document Intelligence 2.1 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true?pivots=programming-language-python)| [Document Intelligence Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
+| Custom model (template) | [Document Intelligence 2.1 ](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true?pivots=programming-language-python)| [Document Intelligence Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
::: moniker-end
ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom.md
- ignite-2023 Previously updated : 02/29/2024 Last updated : 05/23/2024 monikerRange: '<=doc-intel-4.0.0'
The following table describes the features available with the associated tools a
|--|--|--|--| | Custom template v 4.0 v3.1 v3.0 | [Document Intelligence 3.1](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)| | Custom neural v4.0 v3.1 v3.0 | [Document Intelligence 3.1](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
-| Custom form v2.1 | [Document Intelligence 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
+| Custom form v2.1 | [Document Intelligence 2.1 GA API](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true) | [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
> [!NOTE] > Custom template models trained with the 3.0 API will have a few improvements over the 2.1 API stemming from improvements to the OCR engine. Datasets used to train a custom template model using the 2.1 API can still be used to train a new model using the 3.0 API.
ai-services Concept Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md
- ignite-2023 Previously updated : 01/19/2024 Last updated : 05/10/2024 monikerRange: '>=doc-intel-3.0.0'
monikerRange: '>=doc-intel-3.0.0'
**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1**](?view=doc-intel-3.1.0&preserve-view=true) ::: moniker-end
+> [!IMPORTANT]
+>
+> * There are separate URLs for Document Intelligence Studio sovereign cloud regions.
+> * Azure for US Government: [Document Intelligence Studio (Azure Fairfax cloud)](https://formrecognizer.appliedai.azure.us/studio)
+> * Microsoft Azure operated by 21Vianet: [Document Intelligence Studio (Azure in China)](https://formrecognizer.appliedai.azure.cn/studio)
+ [Document Intelligence Studio](https://documentintelligence.ai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Document Intelligence service into your applications. Use the Document Intelligence Studio to: * Learn more about the different capabilities in Document Intelligence.
monikerRange: '>=doc-intel-3.0.0'
* Experiment with different add-on and preview features to adapt the output to your needs. * Train custom classification models to classify documents. * Train custom extraction models to extract fields from documents.
-* Get sample code for the language-specific SDKs to integrate into your applications.
-
-Use the [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md) to get started analyzing documents with document analysis or prebuilt models. Build custom models and reference the models in your applications using one of the [language specific SDKs](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and other quickstarts.
-
-The following image shows the landing page for Document Intelligence Studio.
+* Get sample code for the language-specific `SDKs` to integrate into your applications.
+Use the [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md) to get started analyzing documents with document analysis or prebuilt models. Build custom models and reference the models in your applications using one of the [language specific `SDKs`](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and other quickstarts.
## Getting started
-If you're visiting the Studio for the first time, follow the [getting started guide](studio-overview.md#get-started-using-document-intelligence-studio) to set up the Studio for use.
+If you're visiting the Studio for the first time, follow the [getting started guide](studio-overview.md#get-started) to set up the Studio for use.
## Analyze options
If you're visiting the Studio for the first time, follow the [getting started gu
✔️ **Make use of the document list options and filters in custom projects**
-* In custom extraction model labeling page, you can now navigate through your training documents with ease by making use of the search, filter and sort by feature.
+* Use the custom extraction model labeling page to navigate through your training documents with ease by using the search, filter, and sort features.
* Utilize the grid view to preview documents or use the list view to scroll through the documents more easily.
If you're visiting the Studio for the first time, follow the [getting started gu
* **Prebuilt models**: Document Intelligence's prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://documentintelligence.ai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model).
-* **Custom extraction models**: Document Intelligence's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. Create standalone custom models or combine two or more custom models to create a composed model to extract data from multiple form types. Start with the [Studio Custom models feature](https://documentintelligence.ai.azure.com/studio/custommodel/projects). Use the help wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. See the [Custom models overview](concept-custom.md) to learn more.
+* **Custom extraction models**: Document Intelligence's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. To extract data from multiple form types, create standalone custom models or combine two or more custom models to create a composed model. Start with the [Studio Custom models feature](https://documentintelligence.ai.azure.com/studio/custommodel/projects). Use the help wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. To learn more, *see* the [Custom models overview](concept-custom.md).
-* **Custom classification models**: Document classification is a new scenario supported by Document Intelligence. the document classifier API supports classification and splitting scenarios. Train a classification model to identify the different types of documents your application supports. The input file for the classification model can contain multiple documents and classifies each document within an associated page range. See [custom classification models](concept-custom-classifier.md) to learn more.
+* **Custom classification models**: Document classification is a new scenario supported by Document Intelligence. The document classifier API supports classification and splitting scenarios. Train a classification model to identify the different types of documents your application supports. The input file for the classification model can contain multiple documents, and the model classifies each document within an associated page range. To learn more, *see* [custom classification models](concept-custom-classifier.md).
-* **Add-on Capabilities**: Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled in the studio using the `Analze Options` button in each model page. There are four add-on capabilities available: highResolution, formula, font, and barcode extraction capabilities. See [Add-on capabilities](concept-add-on-capabilities.md) to learn more.
+* **Add-on Capabilities**: Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled in the studio using the `Analyze Options` button on each model page. There are four add-on capabilities available: highResolution, formula, font, and barcode extraction capabilities. To learn more, *see* [Add-on capabilities](concept-add-on-capabilities.md).
## Next steps
ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-general-document.md
- ignite-2023 Previously updated : 03/06/2024 Last updated : 05/23/2024
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**General document model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-document**|
+|**General document model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-document**|
::: moniker-end ::: moniker range="doc-intel-3.1.0 || doc-intel-3.0.0"
ai-services Concept Health Insurance Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-health-insurance-card.md
- ignite-2023 Previously updated : 03/06/2024 Last updated : 05/23/2024 monikerRange: 'doc-intel-4.0.0 || >=doc-intel-3.0.0'
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Health insurance card model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-healthInsuranceCard.us**|
+|**Health insurance card model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-healthInsuranceCard.us**|
::: moniker-end ## Input requirements
ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-id-document.md
Previously updated : 03/06/2024 Last updated : 05/23/2024 - references.regions
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**ID document model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-idDocument**|
+|**ID document model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-idDocument**|
::: moniker-end ::: moniker range="doc-intel-2.1.0"
Extract data, including name, birth date, and expiration date, from ID documents
The following are the fields extracted per document type. The Document Intelligence ID model `prebuilt-idDocument` extracts the following fields in the `documents.*.fields`. The json output includes all the extracted text in the documents, words, lines, and styles. ++
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Prebuilt_model/sample_analyze_identity_documents.py)
+++
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Prebuilt_model/sample_analyze_identity_documents.py)
+++ ### `idDocument.driverLicense` | Field | Type | Description | Example |
The following are the fields extracted per document type. The Document Intellige
::: moniker-end
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/main/Python(v4.0)/Prebuilt_model)
+
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/v3.1(2023-07-31-GA)/Python(v3.1)/Prebuilt_model)
+ ::: moniker range="doc-intel-2.1.0" * Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
ai-services Concept Incremental Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-incremental-classifier.md
Previously updated : 02/17/2024 Last updated : 05/23/2024 monikerRange: '>=doc-intel-4.0.0'
Incremental training is useful when you want to improve the quality of a custom
### Create an incremental classifier build request
-The incremental classifier build request is similar to the [classify document build request](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2024-02-29-preview/operations/ClassifyDocument) but includes the new `baseClassifierId` property. The `baseClassifierId` is set to the existing classifier that you want to extend. You also need to provide the `docTypes` for the different document types in the sample set. By providing a `docType` that exists in the baseClassifier, the samples provided in the request are added to the samples provided when the base classifier was trained. New `docType` values added in the incremental training are only added to the new classifier. The process to specify the samples remains unchanged. For more information, *see* [training a classifier model](concept-custom-classifier.md#training-a-model).
+The incremental classifier build request is similar to the [classify document build request](/rest/api/aiservices/document-classifiers?view=rest-aiservices-v4.0%20(2024-02-29-preview)&preserve-view=true) but includes the new `baseClassifierId` property. The `baseClassifierId` is set to the existing classifier that you want to extend. You also need to provide the `docTypes` for the different document types in the sample set. By providing a `docType` that exists in the baseClassifier, the samples provided in the request are added to the samples provided when the base classifier was trained. New `docType` values added in the incremental training are only added to the new classifier. The process to specify the samples remains unchanged. For more information, *see* [training a classifier model](concept-custom-classifier.md#training-a-model).
### Sample POST request
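A minimal sketch of what an incremental build request can look like when sent with Python's `requests` library is shown next. The endpoint path (`/documentintelligence/documentClassifiers:build`), API version, and body fields (`classifierId`, `baseClassifierId`, and `docTypes` with an `azureBlobSource`) are assumptions based on the description above; verify them against the REST reference linked earlier before use.

```Python
import requests

# Placeholder values; replace with your own resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

# Assumed request shape: extend an existing classifier by setting baseClassifierId.
body = {
    "classifierId": "myIncrementalClassifier",
    "baseClassifierId": "myBaseClassifier",
    "docTypes": {
        "invoice": {
            "azureBlobSource": {
                "containerUrl": "https://<your-storage>/training-container?<sas-token>",
                "prefix": "invoice/"
            }
        }
    }
}

response = requests.post(
    f"{endpoint}/documentintelligence/documentClassifiers:build",
    params={"api-version": "2024-02-29-preview"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)

# The build operation is asynchronous; poll the Operation-Location header for status.
print(response.status_code, response.headers.get("Operation-Location"))
```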
ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-invoice.md
Previously updated : 02/29/2024 Last updated : 05/23/2024
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Invoice model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-invoice**|
+|**Invoice model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-invoice**|
::: moniker-end ::: moniker range="doc-intel-2.1.0"
See how data, including customer information, vendor details, and line items, is
*See* our [Language Support - prebuilt models](language-support-prebuilt.md) page for a complete list of supported languages. ## Field extraction
+The Document Intelligence invoice model `prebuilt-invoice` extracts the following fields.
++
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Prebuilt_model/sample_analyze_invoices.py)
+++
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Prebuilt_model/sample_analyze_invoices.py)
+ |Name| Type | Description | Standardized output |
-|:--|:-|:-|::|
-| CustomerName | String | Invoiced customer| |
-| CustomerId | String | Customer reference ID | |
-| PurchaseOrder | String | Purchase order reference number | |
-| InvoiceId | String | ID for this specific invoice (often "Invoice Number") | |
-| InvoiceDate | Date | Date the invoice was issued | yyyy-mm-dd|
-| DueDate | Date | Date payment for this invoice is due | yyyy-mm-dd|
-| VendorName | String | Vendor name | |
-| VendorTaxId | String | The taxpayer number associated with the vendor | |
-| VendorAddress | String | Vendor mailing address| |
-| VendorAddressRecipient | String | Name associated with the VendorAddress | |
-| CustomerAddress | String | Mailing address for the Customer | |
-| CustomerTaxId | String | The taxpayer number associated with the customer | |
-| CustomerAddressRecipient | String | Name associated with the CustomerAddress | |
-| BillingAddress | String | Explicit billing address for the customer | |
-| BillingAddressRecipient | String | Name associated with the BillingAddress | |
-| ShippingAddress | String | Explicit shipping address for the customer | |
-| ShippingAddressRecipient | String | Name associated with the ShippingAddress | |
-| PaymentTerm | String | The terms of payment for the invoice | |
- |Sub&#8203;Total| Number | Subtotal field identified on this invoice | Integer |
-| TotalTax | Number | Total tax field identified on this invoice | Integer |
-| InvoiceTotal | Number (USD) | Total new charges associated with this invoice | Integer |
-| AmountDue | Number (USD) | Total Amount Due to the vendor | Integer |
-| ServiceAddress | String | Explicit service address or property address for the customer | |
-| ServiceAddressRecipient | String | Name associated with the ServiceAddress | |
-| RemittanceAddress | String | Explicit remittance or payment address for the customer | |
-| RemittanceAddressRecipient | String | Name associated with the RemittanceAddress | |
-| ServiceStartDate | Date | First date for the service period (for example, a utility bill service period) | yyyy-mm-dd |
-| ServiceEndDate | Date | End date for the service period (for example, a utility bill service period) | yyyy-mm-dd|
-| PreviousUnpaidBalance | Number | Explicit previously unpaid balance | Integer |
-| CurrencyCode | String | The currency code associated with the extracted amount | |
-| KVKNumber(NL-only) | String | A unique identifier for businesses registered in the Netherlands|12345678|
-| PaymentDetails | Array | An array that holds Payment Option details such as `IBAN`,`SWIFT`, `BPay(AU)` | |
-| TotalDiscount | Number | The total discount applied to an invoice | Integer |
-| TaxItems | Array | AN array that holds added tax information such as `CGST`, `IGST`, and `SGST`. This line item is currently only available for the Germany (`de`), Spain (`es`), Portugal (`pt`), and English Canada (`en-CA`) locales| |
-
-### Line items
+|:--|:-|:-|:-|
+| CustomerName |string | Invoiced customer|Microsoft Corp|
+| CustomerId |string | Customer reference ID |CID-12345 |
+| PurchaseOrder |string | Purchase order reference number |PO-3333 |
+| InvoiceId |string | ID for this specific invoice (often Invoice Number) |INV-100 |
+| InvoiceDate |date |date the invoice was issued | mm-dd-yyyy|
+| DueDate |date |date payment for this invoice is due |mm-dd-yyyy|
+| VendorName |string | Vendor who created this invoice |CONTOSO LTD.|
+| VendorAddress |address| Vendor mailing address| 123 456th St, New York, NY 10001 |
+| VendorAddressRecipient |string | Name associated with the VendorAddress |Contoso Headquarters |
+| CustomerAddress |address | Mailing address for the Customer | 123 Other St, Redmond WA, 98052|
+| CustomerAddressRecipient |string | Name associated with the CustomerAddress |Microsoft Corp |
+| BillingAddress |address | Explicit billing address for the customer | 123 Bill St, Redmond WA, 98052 |
+| BillingAddressRecipient |string | Name associated with the BillingAddress |Microsoft Services |
+| ShippingAddress |address | Explicit shipping address for the customer | 123 Ship St, Redmond WA, 98052|
+| ShippingAddressRecipient |string | Name associated with the ShippingAddress |Microsoft Delivery |
+|Sub&#8203;Total| currency| Subtotal field identified on this invoice | $100.00 |
+| TotalDiscount | currency | The total discount applied to an invoice | $5.00 |
+| TotalTax | currency| Total tax field identified on this invoice | $10.00 |
+| InvoiceTotal | currency | Total new charges associated with this invoice | $10.00 |
+| AmountDue | currency | Total Amount Due to the vendor | $610 |
+| PreviousUnpaidBalance | currency| Explicit previously unpaid balance | $500.00 |
+| RemittanceAddress |address| Explicit remittance or payment address for the customer |123 Remit St New York, NY, 10001 |
+| RemittanceAddressRecipient |string | Name associated with the RemittanceAddress |Contoso Billing |
+| ServiceAddress |address | Explicit service address or property address for the customer |123 Service St, Redmond WA, 98052 |
+| ServiceAddressRecipient |string | Name associated with the ServiceAddress |Microsoft Services |
+| ServiceStartDate |date | First date for the service period (for example, a utility bill service period) | mm-dd-yyyy |
+| ServiceEndDate |date | End date for the service period (for example, a utility bill service period) | mm-dd-yyyy|
+| VendorTaxId |string | The taxpayer number associated with the vendor |123456-7 |
+|CustomerTaxId|string|The taxpayer number associated with the customer|765432-1|
+| PaymentTerm |string | The terms of payment for the invoice |Net90 |
+| KVKNumber |string | A unique identifier for businesses registered in the Netherlands (NL-only)|12345678|
+| CurrencyCode |string | The currency code associated with the extracted amount | |
+| PaymentDetails | array | An array that holds Payment Option details such as `IBAN`,`SWIFT`, `BPayBillerCode(AU)`, `BPayReference(AU)` | |
+|TaxDetails|array|An array that holds tax details like amount and rate||
+| TaxDetails | array | An array that holds added tax information such as `CGST`, `IGST`, and `SGST`. This line item is currently only available for the Germany (`de`), Spain (`es`), Portugal (`pt`), and English Canada (`en-CA`) locales| |
+
+### Line items array
Following are the line items extracted from an invoice in the JSON output response (the following output uses this [sample invoice](media/sample-invoice.jpg):
-|Name| Type | Description | Text (line item #1) | Value (standardized output) |
-|:--|:-|:-|:-| :-|
-| Items | String | Full string text line of the line item | 3/4/2021 A123 Consulting Services 2 hours $30.00 10% $60.00 | |
-| Amount | Number | The amount of the line item | $60.00 | 100 |
-| Description | String | The text description for the invoice line item | Consulting service | Consulting service |
-| Quantity | Number | The quantity for this invoice line item | 2 | 2 |
-| UnitPrice | Number | The net or gross price (depending on the gross invoice setting of the invoice) of one unit of this item | $30.00 | 30 |
-| ProductCode | String| Product code, product number, or SKU associated with the specific line item | A123 | |
-| Unit | String| The unit of the line item, e.g, kg, lb etc. | Hours | |
-| Date | Date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 |
-| Tax | Number | Tax associated with each line item. Possible values include tax amount and tax Y/N | 10.00 | |
-| TaxRate | Number | Tax Rate associated with each line item. | 10% | |
+|Name| Type | Description | Value (standardized output) |
+|:--|:-|:-|:-|
+| Amount | currency | The amount of the line item | $60.00 |
+| Date | date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021|
+| Description | string | The text description for the invoice line item | Consulting service|
+| Quantity | number | The quantity for this invoice line item | 2 |
+| ProductCode | string| Product code, product number, or SKU associated with the specific line item | A123|
+| Tax | currency | Tax associated with each line item. Possible values include tax amount and tax Y/N | $6.00 |
+| TaxRate | string | Tax Rate associated with each line item. | 18%|
+| Unit | string| The unit of the line item, e.g., kg, lb, etc. | Hours|
+| UnitPrice | number | The net or gross price (depending on the gross invoice setting of the invoice) of one unit of this item | $30.00 |
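As an inline complement to the GitHub samples linked above, here is a minimal sketch that reads a few top-level invoice fields and iterates the `Items` array. It assumes the `azure-ai-formrecognizer` (v3.x) Python client and placeholder endpoint, key, and URL values; field availability depends on the API version you call.

```Python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder
url = "https://<your-storage>/sample-invoice.pdf"  # placeholder

client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
poller = client.begin_analyze_document_from_url("prebuilt-invoice", url)
result = poller.result()

for invoice in result.documents:
    # Top-level fields such as VendorName and InvoiceTotal.
    for name in ("VendorName", "InvoiceId", "InvoiceDate", "InvoiceTotal"):
        field = invoice.fields.get(name)
        if field:
            print(f"{name}: {field.value} (confidence: {field.confidence})")

    # Each entry in Items is itself a dictionary of sub-fields.
    items = invoice.fields.get("Items")
    if items:
        for item in items.value:
            description = item.value.get("Description")
            amount = item.value.get("Amount")
            print(
                f"Line item: {description.value if description else None}, "
                f"amount: {amount.value if amount else None}"
            )
```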
The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output. ### Key-value pairs
The following are the line items extracted from an invoice in the JSON output re
| Date | date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 | | Tax | number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |
+The following are complex fields extracted from an invoice in the JSON output response:
+
+### TaxDetails
+Tax details break down the different taxes applied to the invoice total.
+
+|Name| Type | Description | Text (line item #1) | Value (standardized output) |
+|:--|:-|:-|:-| :-|
+| Items | string | Full string text line of the tax item | V.A.T. 15% $60.00 | |
+| Amount | number | The tax amount of the tax item | 60.00 | 60 |
+| Rate | string | The tax rate of the tax item | 15% | |
+
+### PaymentDetails
+Lists all the payment options detected in the field.
+
+|Name| Type | Description | Text (line item #1) | Value (standardized output) |
+|:--|:-|:-|:-| :-|
+| IBAN | string | International Bank Account Number | GB33BUKB20201555555555 | |
+| SWIFT | string | SWIFT code | BUKBGB22 | |
+| BPayBillerCode | string | Australian B-Pay Biller Code | 12345 | |
+| BPayReference | string | Australian B-Pay Reference Code | 98765432100 | |
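Reading these complex fields in code follows the same pattern as the line items: `TaxDetails` and `PaymentDetails` are arrays whose entries are dictionaries of sub-fields. The following is a minimal sketch, again assuming the `azure-ai-formrecognizer` (v3.x) client, placeholder endpoint, key, and URL values, and an API version that returns these fields.

```Python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder
url = "https://<your-storage>/sample-invoice.pdf"  # placeholder

client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
result = client.begin_analyze_document_from_url("prebuilt-invoice", url).result()

for invoice in result.documents:
    # TaxDetails: each entry holds sub-fields such as Amount and Rate.
    tax_details = invoice.fields.get("TaxDetails")
    if tax_details:
        for tax in tax_details.value:
            amount = tax.value.get("Amount")
            rate = tax.value.get("Rate")
            print(f"Tax amount: {amount.value if amount else None}, rate: {rate.value if rate else None}")

    # PaymentDetails: each entry can hold IBAN, SWIFT, or B-Pay sub-fields.
    payment_details = invoice.fields.get("PaymentDetails")
    if payment_details:
        for option in payment_details.value:
            iban = option.value.get("IBAN")
            swift = option.value.get("SWIFT")
            print(f"IBAN: {iban.value if iban else None}, SWIFT: {swift.value if swift else None}")
```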
++ ### JSON output The JSON output has three parts:
The JSON output has three parts:
::: moniker-end
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/main/Python(v4.0)/Prebuilt_model)
+
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/v3.1(2023-07-31-GA)/Python(v3.1)/Prebuilt_model)
+ ::: moniker range="doc-intel-2.1.0" * Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-layout.md
- ignite-2023 Previously updated : 02/21/2024 Last updated : 05/23/2024
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Layout model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-layout**|
+|**Layout model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-layout**|
::: moniker-end ::: moniker range="doc-intel-2.1.0"
The pages collection is a list of pages within the document. Each page is repres
|PowerPoint (PPTX) | Each slide = 1 page unit, embedded or linked images not supported | Total slides | |HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each | + ```json "pages": [ {
The pages collection is a list of pages within the document. Each page is repres
] ``` ++
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze pages.
+for page in result.pages:
+ print(f"-Analyzing layout from page #{page.page_number}-")
+ print(
+ f"Page has width: {page.width} and height: {page.height}, measured with unit: {page.unit}"
+ )
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Layout_model/sample_analyze_layout.py)
+
+#### [Output](#tab/output)
+```json
+"pages": [
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": []
+ }
+]
+```
+++
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze pages.
+for page in result.pages:
+ print(f"-Analyzing layout from page #{page.page_number}-")
+ print(f"Page has width: {page.width} and height: {page.height}, measured with unit: {page.unit}")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Layout_model/sample_analyze_layout.py)
+
+#### [Output](#tab/output)
+```json
+"pages": [
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": []
+ }
+]
+```
++++ ### Extract selected pages from documents For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
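For example, a minimal sketch using the `azure-ai-documentintelligence` preview Python client with placeholder endpoint, key, and URL values is shown below; the `pages` keyword used here mirrors the REST `pages` query parameter, so confirm it against the SDK reference for your client version.

```Python
from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder
url = "https://<your-storage>/large-report.pdf"  # placeholder

client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Restrict layout analysis to pages 1-2 and 5 of a large document.
poller = client.begin_analyze_document(
    "prebuilt-layout",
    AnalyzeDocumentRequest(url_source=url),
    pages="1-2,5",
)
result = poller.result()
print(f"Analyzed {len(result.pages)} page(s)")
```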
The document layout model in Document Intelligence extracts print and handwritte
For Microsoft Word, Excel, PowerPoint, and HTML, the Layout model in Document Intelligence versions 2024-02-29-preview and 2023-10-31-preview extracts all embedded text as is. Text is extracted as words and paragraphs. Embedded images aren't supported. ++ ```json "words": [ {
For Microsoft Word, Excel, PowerPoint, and HTML, Document Intelligence versions
} ] ```++
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze lines.
+for line_idx, line in enumerate(page.lines):
+ words = line.get_words()
+ print(
+ f"...Line # {line_idx} has word count {len(words)} and text '{line.content}' "
+ f"within bounding polygon '{format_polygon(line.polygon)}'"
+ )
+
+ # Analyze words.
+ for word in words:
+ print(
+ f"......Word '{word.content}' has a confidence of {word.confidence}"
+ )
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Layout_model/sample_analyze_layout.py)
+
+#### [Output](#tab/output)
+```json
+"words": [
+ {
+ "content": "While",
+ "polygon": [],
+ "confidence": 0.997,
+ "span": {}
+ },
+],
+"lines": [
+ {
+ "content": "While healthcare is still in the early stages of its Al journey, we",
+ "polygon": [],
+ "spans": [],
+ }
+]
+```
+++
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze lines.
+if page.lines:
+ for line_idx, line in enumerate(page.lines):
+ words = get_words(page, line)
+ print(
+ f"...Line # {line_idx} has word count {len(words)} and text '{line.content}' "
+ f"within bounding polygon '{line.polygon}'"
+ )
+
+ # Analyze words.
+ for word in words:
+ print(f"......Word '{word.content}' has a confidence of {word.confidence}")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Layout_model/sample_analyze_layout.py)
+
+#### [Output](#tab/output)
+```json
+"words": [
+ {
+ "content": "While",
+ "polygon": [],
+ "confidence": 0.997,
+ "span": {}
+ },
+],
+"lines": [
+ {
+ "content": "While healthcare is still in the early stages of its Al journey, we",
+ "polygon": [],
+ "spans": [],
+ }
+]
+```
++ ### Handwritten style for text lines
If you enable the [font/style addon capability](concept-add-on-capabilities.md#f
The Layout model also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). The text representation (that is, `:selected:` and `:unselected`) is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document. +
+```json
+{
+ "selectionMarks": [
+ {
+ "state": "unselected",
+ "polygon": [],
+ "confidence": 0.995,
+ "span": {
+ "offset": 1421,
+ "length": 12
+ }
+ }
+ ]
+}
+```
++
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze selection marks.
+for selection_mark in page.selection_marks:
+ print(
+ f"Selection mark is '{selection_mark.state}' within bounding polygon "
+ f"'{format_polygon(selection_mark.polygon)}' and has a confidence of {selection_mark.confidence}"
+ )
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Layout_model/sample_analyze_layout.py)
+
+#### [Output](#tab/output)
```json { "selectionMarks": [
The Layout model also extracts selection marks from documents. Extracted selecti
] } ```+++
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze selection marks.
+if page.selection_marks:
+ for selection_mark in page.selection_marks:
+ print(
+ f"Selection mark is '{selection_mark.state}' within bounding polygon "
+ f"'{selection_mark.polygon}' and has a confidence of {selection_mark.confidence}"
+ )
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Layout_model/sample_analyze_layout.py)
+
+#### [Output](#tab/output)
+```json
+{
+ "selectionMarks": [
+ {
+ "state": "unselected",
+ "polygon": [],
+ "confidence": 0.995,
+ "span": {
+ "offset": 1421,
+ "length": 12
+ }
+ }
+ ]
+}
+```
++ ### Tables
Extracting tables is a key requirement for processing documents containing large
> [!NOTE] > Table is not supported if the input file is XLSX. ++ ```json { "tables": [
Extracting tables is a key requirement for processing documents containing large
} ``` +
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze tables.
+for table_idx, table in enumerate(result.tables):
+ print(
+ f"Table # {table_idx} has {table.row_count} rows and "
+ f"{table.column_count} columns"
+ )
+ for region in table.bounding_regions:
+ print(
+ f"Table # {table_idx} location on page: {region.page_number} is {format_polygon(region.polygon)}"
+ )
+ for cell in table.cells:
+ print(
+ f"...Cell[{cell.row_index}][{cell.column_index}] has text '{cell.content}'"
+ )
+ for region in cell.bounding_regions:
+ print(
+ f"...content on page {region.page_number} is within bounding polygon '{format_polygon(region.polygon)}'"
+ )
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Layout_model/sample_analyze_layout.py)
+
+#### [Output](#tab/output)
+```json
+{
+ "tables": [
+ {
+ "rowCount": 9,
+ "columnCount": 4,
+ "cells": [
+ {
+ "kind": "columnHeader",
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "columnSpan": 4,
+ "content": "(In millions, except earnings per share)",
+ "boundingRegions": [],
+ "spans": []
+ },
+ ]
+ }
+ ]
+}
+
+```
+++
+#### [Sample code](#tab/sample-code)
+```Python
+if result.tables:
+ for table_idx, table in enumerate(result.tables):
+ print(f"Table # {table_idx} has {table.row_count} rows and " f"{table.column_count} columns")
+ if table.bounding_regions:
+ for region in table.bounding_regions:
+ print(f"Table # {table_idx} location on page: {region.page_number} is {region.polygon}")
+ # Analyze cells.
+ for cell in table.cells:
+ print(f"...Cell[{cell.row_index}][{cell.column_index}] has text '{cell.content}'")
+ if cell.bounding_regions:
+ for region in cell.bounding_regions:
+ print(f"...content on page {region.page_number} is within bounding polygon '{region.polygon}'")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Layout_model/sample_analyze_layout.py)
+
+#### [Output](#tab/output)
+```json
+{
+ "tables": [
+ {
+ "rowCount": 9,
+ "columnCount": 4,
+ "cells": [
+ {
+ "kind": "columnHeader",
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "columnSpan": 4,
+ "content": "(In millions, except earnings per share)",
+ "boundingRegions": [],
+ "spans": []
+ },
+ ]
+ }
+ ]
+}
+
+```
+ ::: moniker-end + ### Annotations (available only in ``2023-02-28-preview`` API.) The Layout model extracts annotations in documents, such as checks and crosses. The response includes the kind of annotation, along with a confidence score and bounding polygon.
The Layout model extracts annotations in documents, such as checks and crosses.
] } ``` ### Output to markdown format The Layout API can output the extracted text in markdown format. Use the `outputContentFormat=markdown` to specify the output format in markdown. The markdown content is output as part of the `content` section.
-```json
-"analyzeResult": {
-"apiVersion": "2024-02-29-preview",
-"modelId": "prebuilt-layout",
-"contentFormat": "markdown",
-"content": "# CONTOSO LTD...",
-}
+#### [Sample code](#tab/sample-code)
+```Python
+document_intelligence_client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+poller = document_intelligence_client.begin_analyze_document(
+ "prebuilt-layout",
+ AnalyzeDocumentRequest(url_source=url),
+ output_content_format=ContentFormat.MARKDOWN,
+)
```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/documentintelligence/azure-ai-documentintelligence/samples/sample_analyze_documents_output_in_markdown.py)
+
+#### [Output](#tab/output)
+
+```Markdown
+<!-- PageHeader="This is the header of the document." -->
+
+This is title
+===
+# 1\. Text
+Latin refers to an ancient Italic language originating in the region of Latium in ancient Rome.
+# 2\. Page Objects
+## 2.1 Table
+Here's a sample table below, designed to be simple for easy understand and quick reference.
+| Name | Corp | Remark |
+| - | - | - |
+| Foo | | |
+| Bar | Microsoft | Dummy |
+Table 1: This is a dummy table
+## 2.2. Figure
+<figure>
+<figcaption>
+
+Figure 1: Here is a figure with text
+</figcaption>
+
+![](figures/0)
+<!-- FigureContent="500 450 400 400 350 250 200 200 200- Feb" -->
+</figure>
+
+# 3\. Others
+Al Document Intelligence is an Al service that applies advanced machine learning to extract text, key-value pairs, tables, and structures from documents automatically and accurately:
+ :selected:
+clear
+ :selected:
+precise
+ :unselected:
+vague
+ :selected:
+coherent
+ :unselected:
+Incomprehensible
+Turn documents into usable data and shift your focus to acting on information rather than compiling it. Start with prebuilt models or create custom models tailored to your documents both on premises and in the cloud with the Al Document Intelligence studio or SDK.
+Learn how to accelerate your business processes by automating text extraction with Al Document Intelligence. This webinar features hands-on demos for key use cases such as document processing, knowledge mining, and industry-specific Al model customization.
+<!-- PageFooter="This is the footer of the document." -->
+<!-- PageFooter="1 | Page" -->
+```
++ ### Figures Figures (charts, images) in documents play a crucial role in complementing and enhancing the textual content, providing visual representations that aid in the understanding of complex information. The figures object detected by the Layout model has key properties like `boundingRegions` (the spatial locations of the figure on the document pages, including the page number and the polygon coordinates that outline the figure's boundary), `spans` (the text spans related to the figure, specifying their offsets and lengths within the document's text; this connection helps associate the figure with its relevant textual context), `elements` (the identifiers for text elements or paragraphs within the document that are related to or describe the figure), and `caption`, if one is present.
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze figures.
+if result.figures:
+ for figures_idx,figures in enumerate(result.figures):
+ print(f"Figure # {figures_idx} has the following spans:{figures.spans}")
+ for region in figures.bounding_regions:
+ print(f"Figure # {figures_idx} location on page:{region.page_number} is within bounding polygon '{region.polygon}'")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Layout_model/sample_analyze_layout.py)
+
+#### [Output](#tab/output)
+ ```json { "figures": [
Figures (charts, images) in documents play a crucial role in complementing and e
] } ``` + ### Sections Hierarchical document structure analysis is pivotal in organizing, comprehending, and processing extensive documents. This approach is vital for semantically segmenting long documents to boost comprehension, facilitate navigation, and improve information retrieval. The advent of [Retrieval Augmented Generation (RAG)](concept-retrieval-augmented-generation.md) in document generative AI underscores the significance of hierarchical document structure analysis. The Layout model supports sections and subsections in the output, which identifies the relationship of sections and object within each section. The hierarchical structure is maintained in `elements` of each section. You can use [output to markdown format](#output-to-markdown-format) to easily get the sections and subsections in markdown.
+#### [Sample code](#tab/sample-code)
+```Python
+document_intelligence_client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+poller = document_intelligence_client.begin_analyze_document(
+ "prebuilt-layout",
+ AnalyzeDocumentRequest(url_source=url),
+ output_content_format=ContentFormat.MARKDOWN,
+)
+
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/documentintelligence/azure-ai-documentintelligence/samples/sample_analyze_documents_output_in_markdown.py)
+
+#### [Output](#tab/output)
```json { "sections": [
Hierarchical document structure analysis is pivotal in organizing, comprehending
} ``` + + ### Natural reading order output (Latin only)
Layout API also extracts selection marks from documents. Extracted selection mar
::: moniker-end
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/main/Python(v4.0)/Layout_model)
+
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/v3.1(2023-07-31-GA)/Python(v3.1)/Layout_model)
+ ::: moniker range="doc-intel-2.1.0" * [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
ai-services Concept Marriage Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-marriage-certificate.md
Previously updated : 02/29/2024- Last updated : 04/23/2024+ monikerRange: '>=doc-intel-4.0.0' <!-- markdownlint-disable MD033 -->
Document Intelligence v4.0 (2024-02-29-preview) supports the following tools, ap
| Feature | Resources | Model ID | |-|-|--|
-|**Contract model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-marriageCertficute.us**|
+|**prebuilt-marriageCertificate.us**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=marriageCertificate.us&formType=marriageCertificate.us)</br>&bullet; [**REST API**](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-marriageCertificate.us**|
::: moniker-end ## Input requirements
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
- ignite-2023 Previously updated : 03/06/2024 Last updated : 05/23/2024
The following table shows the available models for each current preview and stable API:
-|**Model Type**| **Model**|&bullet; [2024-02-29-preview](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2024-02-29-preview&preserve-view=true&branch=docintelligence&tabs=HTTP) <br> &bullet [2023-10-31-preview](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)|[2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)|
+|**Model Type**| **Model**|&bullet; [2024-02-29-preview](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2024-02-29-preview&preserve-view=true&branch=docintelligence&tabs=HTTP) <br> &bullet; [2023-10-31-preview](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)|[2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)|[v2.1 (GA)](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)|
|-|--||--||| |Document analysis models|[Read](concept-read.md) | ✔️| ✔️| ✔️| n/a| |Document analysis models|[Layout](concept-layout.md) | ✔️| ✔️| ✔️| ✔️|
The following table shows the available models for each current preview and stab
\* - Contains sub-models. See the model specific information for supported variations and sub-types.
-|**Add-on Capability**| **Add-On/Free**|&bullet; [2024-02-29-preview](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2024-02-29-preview&preserve-view=true&branch=docintelligence&tabs=HTTP) <br>&bullet [2023-10-31-preview](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)|
+|**Add-on Capability**| **Add-On/Free**|&bullet; [2024-02-29-preview](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2024-02-29-preview&preserve-view=true&branch=docintelligence&tabs=HTTP) <br>&bullet; [2023-10-31-preview](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)|[v2.1 (GA)](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)|
|-|--||--||| |Font property extraction|Add-On| ✔️| ✔️| n/a| n/a| |Formula extraction|Add-On| ✔️| ✔️| n/a| n/a|
ai-services Concept Mortgage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-mortgage-documents.md
Previously updated : 02/29/2024- Last updated : 05/07/2024+ monikerRange: '>=doc-intel-4.0.0' <!-- markdownlint-disable MD033 -->
The Document Intelligence Mortgage models use powerful Optical Character Recogni
**Supported document types:**
-* 1003 End-User License Agreement (EULA)
-* Form 1008
-* Mortgage closing disclosure
+* Uniform Residential Loan Application (Form 1003)
+* Uniform Underwriting and Transmittal Summary (Form 1008)
+* Closing Disclosure form
## Development options
To see how data extraction works for the mortgage documents service, you need th
*See* our [Language Support - prebuilt models](language-support-prebuilt.md) page for a complete list of supported languages.
-## Field extraction 1003 End-User License Agreement (EULA)
+## Field extraction 1003 Uniform Residential Loan Application (URLA)
-The following are the fields extracted from a 1003 EULA form in the JSON output response.
+The following are the fields extracted from a 1003 URLA form in the JSON output response.
|Name| Type | Description | Example output | |:--|:-|:-|::|
The following are the fields extracted from a 1003 EULA form in the JSON output
| Loan| Object | An object that contains loan information including: amount, purpose type, refinance type.| | | Property | object | An object that contains information about the property including: address, number of units, value.| |
-The 1003 EULA key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
+The 1003 URLA key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
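A hedged sketch of reading 1003 URLA fields follows, assuming the `azure-ai-documentintelligence` preview Python client, the model ID `prebuilt-mortgage.us.1003`, and placeholder endpoint, key, and URL values; verify the model ID and field access pattern against the prebuilt model reference for your API version.

```Python
from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest

# Placeholder values; replace with your own resource endpoint, key, and document URL.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"
url = "https://<your-storage>/sample-urla-1003.pdf"

client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Model ID assumed from this article; verify it before use.
poller = client.begin_analyze_document(
    "prebuilt-mortgage.us.1003",
    AnalyzeDocumentRequest(url_source=url),
)
result = poller.result()

for doc in result.documents:
    # Borrower, Loan, and Property are object fields whose sub-fields sit in value_object.
    for name in ("Borrower", "Loan", "Property"):
        field = doc.fields.get(name)
        if field and field.value_object:
            for sub_name, sub_field in field.value_object.items():
                print(f"{name}.{sub_name}: {sub_field.content} (confidence: {sub_field.confidence})")
```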
-## Field extraction form 1008
+## Field extraction 1008 Uniform Underwriting and Transmittal Summary
The following are the fields extracted from a 1008 form in the JSON output response.
The following are the fields extracted from a mortgage closing disclosure form i
| Transaction | Object | An object that contains transaction information including: Borrower's name, Borrower's address, Seller name.| | | Loan | Object | An object that contains loan information including: term, purpose, product. | | - The mortgage closing disclosure key-value pairs and line items extracted are in the `documentResults` section of the JSON output. ## Next steps
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
- ignite-2023 Previously updated : 02/09/2024 Last updated : 05/23/2024
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Read OCR model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-read**|
+|**Read OCR model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-read**|
::: moniker-end ## Input requirements
The pages collection is a list of pages within the document. Each page is repres
|PowerPoint (PPTX) | Each slide = 1 page unit, embedded or linked images not supported | Total slides | |HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each | + ```json "pages": [ {
The pages collection is a list of pages within the document. Each page is repres
} ] ```++
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze pages.
+for page in result.pages:
+ print(f"-Analyzing document from page #{page.page_number}-")
+ print(
+ f"Page has width: {page.width} and height: {page.height}, measured with unit: {page.unit}"
+ )
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Read_model/sample_analyze_read.py)
+
+#### [Output](#tab/output)
+```json
+"pages": [
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": []
+ }
+]
+```
++++
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze pages.
+for page in result.pages:
+ print(f"-Analyzing document from page #{page.page_number}-")
+ print(f"Page has width: {page.width} and height: {page.height}, measured with unit: {page.unit}")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Read_model/sample_analyze_read.py)
+
+#### [Output](#tab/output)
+```json
+"pages": [
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": []
+ }
+]
+```
+ ### Select pages for text extraction
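For example, a minimal sketch using the `azure-ai-formrecognizer` (v3.x) Python client with placeholder endpoint, key, and URL values; the `pages` keyword mirrors the REST `pages` query parameter.

```Python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder
url = "https://<your-storage>/long-document.pdf"  # placeholder

client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Read only pages 1 through 3 of the document.
poller = client.begin_analyze_document_from_url("prebuilt-read", url, pages="1-3")
result = poller.result()
print(f"Extracted text from {len(result.pages)} page(s)")
```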
The Read OCR model extracts print and handwritten style text as `lines` and `wor
For Microsoft Word, Excel, PowerPoint, and HTML, Document Intelligence Read model v3.1 and later versions extract all embedded text as is. Text is extracted as words and paragraphs. Embedded images aren't supported.
+```json
+"words": [
+ {
+ "content": "While",
+ "polygon": [],
+ "confidence": 0.997,
+ "span": {}
+ },
+],
+"lines": [
+ {
+ "content": "While healthcare is still in the early stages of its Al journey, we",
+ "polygon": [],
+ "spans": [],
+ }
+]
+```
+
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze lines.
+for line_idx, line in enumerate(page.lines):
+ words = line.get_words()
+ print(
+ f"...Line # {line_idx} has {len(words)} words and text '{line.content}' within bounding polygon '{format_polygon(line.polygon)}'"
+ )
+
+ # Analyze words.
+ for word in words:
+ print(
+ f"......Word '{word.content}' has a confidence of {word.confidence}"
+ )
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Read_model/sample_analyze_read.py)
+#### [Output](#tab/output)
```json "words": [ {
For Microsoft Word, Excel, PowerPoint, and HTML, Document Intelligence Read mode
} ] ```+++
+#### [Sample code](#tab/sample-code)
+```Python
+# Analyze lines.
+if page.lines:
+ for line_idx, line in enumerate(page.lines):
+ words = get_words(page, line)
+ print(
+ f"...Line # {line_idx} has {len(words)} words and text '{line.content}' within bounding polygon '{line.polygon}'"
+ )
+
+ # Analyze words.
+ for word in words:
+ print(f"......Word '{word.content}' has a confidence of {word.confidence}")
+```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Read_model/sample_analyze_read.py)
+
+#### [Output](#tab/output)
+```json
+"words": [
+ {
+ "content": "While",
+ "polygon": [],
+ "confidence": 0.997,
+ "span": {}
+ },
+],
+"lines": [
+ {
+ "content": "While healthcare is still in the early stages of its Al journey, we",
+ "polygon": [],
+ "spans": [],
+ }
+]
+```
+ ### Handwritten style for text lines
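A minimal sketch, assuming the `azure-ai-formrecognizer` (v3.x) client and placeholder endpoint, key, and URL values, that checks the returned `styles` collection for handwritten content:

```Python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder
url = "https://<your-storage>/handwritten-note.jpg"  # placeholder

client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
result = client.begin_analyze_document_from_url("prebuilt-read", url).result()

# Each style reports whether its associated text spans are handwritten.
for style in result.styles:
    if style.is_handwritten:
        print(f"Handwritten content detected (confidence: {style.confidence})")
```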
Explore our REST API:
> [!div class="nextstepaction"] > [Document Intelligence API v4.0](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)+
+Find more samples on GitHub:
+> [!div class="nextstepaction"]
+> [Read model.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/main/Python(v4.0)/Read_model)
++
+Find more samples on GitHub:
+> [!div class="nextstepaction"]
+> [Read model.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/v3.1(2023-07-31-GA)/Python(v3.1)/Read_model)
+
ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-receipt.md
- ignite-2023 Previously updated : 02/29/2024 Last updated : 05/23/2024
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Receipt model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-receipt**|
+|**Receipt model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-receipt**|
::: moniker-end ::: moniker range="doc-intel-2.1.0"
See how Document Intelligence extracts data, including time and date of transact
Document Intelligence v3.0 and later versions introduce several new features and capabilities. In addition to thermal receipts, the **Receipt** model supports single-page hotel receipt processing and tax detail extraction for all receipt types. Document Intelligence v4.0 and later versions introduce support for currency for all price-related fields for thermal and hotel receipts.
+
++
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Prebuilt_model/sample_analyze_receipts.py)
+++
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Prebuilt_model/sample_analyze_receipts.py)
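As an inline complement to the linked samples, a minimal sketch follows (assuming the `azure-ai-formrecognizer` v3.x client and placeholder endpoint, key, and URL values) that reads common receipt fields and line items.

```Python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder
url = "https://<your-storage>/sample-receipt.png"  # placeholder

client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
result = client.begin_analyze_document_from_url("prebuilt-receipt", url).result()

for receipt in result.documents:
    # Top-level fields such as MerchantName and Total.
    for name in ("MerchantName", "TransactionDate", "Total"):
        field = receipt.fields.get(name)
        if field:
            print(f"{name}: {field.value} (confidence: {field.confidence})")

    # Items is an array; each entry is a dictionary of sub-fields.
    items = receipt.fields.get("Items")
    if items:
        for item in items.value:
            description = item.value.get("Description")
            total_price = item.value.get("TotalPrice")
            print(
                f"Item: {description.value if description else None}, "
                f"total: {total_price.value if total_price else None}"
            )
```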
++ ### Receipt
See how Document Intelligence extracts data, including time and date of transact
::: moniker-end
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/main/Python(v4.0)/Prebuilt_model)
+
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/v3.1(2023-07-31-GA)/Python(v3.1)/Prebuilt_model)
+ ::: moniker range="doc-intel-2.1.0" * Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
ai-services Concept Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-retrieval-augmented-generation.md
docs_string = docs[0].page_content
splits = text_splitter.split_text(docs_string) splits ```
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Retrieval_Augmented_Generation_(RAG)_samples/sample_rag_langchain.ipynb)
+ ## Next steps
ai-services Concept Tax Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-tax-document.md
- ignite-2023 Previously updated : 03/06/2024 Last updated : 05/23/2024 monikerRange: '>=doc-intel-3.0.0'
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**US tax form models**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**&bullet; prebuilt-tax.us.W-2</br>&bullet; prebuilt-tax.us.1098</br>&bullet; prebuilt-tax.us.1098E</br>&bullet; prebuilt-tax.us.1098T**|
+|**US tax form models**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**&bullet; prebuilt-tax.us.W-2</br>&bullet; prebuilt-tax.us.1098</br>&bullet; prebuilt-tax.us.1098E</br>&bullet; prebuilt-tax.us.1098T**|
::: moniker-end ## Input requirements
See how data, including customer information, vendor details, and line items, is
The following are the fields extracted from a W-2 tax form in the JSON output response. +
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/v3.1(2023-07-31-GA)/Python(v3.1)/Prebuilt_model/sample_analyze_tax_us_w2.py)
+++
+> [!div class="nextstepaction"]
+> [View samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/blob/main/Python(v4.0)/Prebuilt_model/sample_analyze_tax_us_w2.py)
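A minimal sketch (assuming the `azure-ai-formrecognizer` v3.x client, the model ID `prebuilt-tax.us.w2` as used in the linked Python sample, and placeholder endpoint, key, and URL values) that reads a few W-2 fields; the field names follow the table below.

```Python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder
url = "https://<your-storage>/sample-w2.png"  # placeholder

client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
result = client.begin_analyze_document_from_url("prebuilt-tax.us.w2", url).result()

for w2 in result.documents:
    # Field names taken from the W-2 field table in this article.
    for name in ("W-2FormVariant", "TaxYear", "WagesTipsAndOtherCompensation"):
        field = w2.fields.get(name)
        if field:
            print(f"{name}: {field.value} (confidence: {field.confidence})")
```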
++ |Name| Type | Description | Example output |dependents |:--|:-|:-|::| | `W-2FormVariant`| String | IR W-2 Form variant. This field can have one of the following values: `W-2`, `W-2AS`, `W-2CM`, `W-2GU`, or `W-2VI`| W-2 |
The tax documents key-value pairs and line items extracted are in the `documentR
* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio). * Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.+
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/main/Python(v4.0)/Prebuilt_model)
+
+* [Find more samples on GitHub.](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/v3.1(2023-07-31-GA)/Python(v3.1)/Prebuilt_model)
ai-services Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/configuration.md
- ignite-2023 Previously updated : 12/13/2023 Last updated : 05/23/2024
:::moniker range="doc-intel-2.1.0 || doc-intel-4.0.0"
-Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only:
+Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read, Layout, Invoice, Receipt, and ID Document models:
-* [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
-* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)
+* [REST API `2022-08-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)
+* [REST API `2023-07-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.1%20(2023-07-31)&tabs=HTTP&preserve-view=true)
* [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md) * [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md)
ai-services Disconnected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/disconnected.md
Previously updated : 12/13/2023 Last updated : 05/23/2024
:::moniker range="doc-intel-2.1.0 || doc-intel-4.0.0"
-Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only:
+Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read, Layout, Invoice, Receipt, and ID Document models:
-* [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
-* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)
+* [REST API `2022-08-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)
+* [REST API `2023-07-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.1%20(2023-07-31)&tabs=HTTP&preserve-view=true)
* [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md) * [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md)
ai-services Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/image-tags.md
- ignite-2023 Previously updated : 12/13/2023 Last updated : 05/23/2024
:::moniker range="doc-intel-2.1.0 || doc-intel-4.0.0"
-Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only:
+Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read, Layout, Invoice, Receipt, and ID Document models:
-* [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
-* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)
+* [REST API `2022-08-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)
+* [REST API `2023-07-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.1%20(2023-07-31)&tabs=HTTP&preserve-view=true)
* [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md) * [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md)
The following containers support DocumentIntelligence v3.0 models and features:
|[**Document Intelligence Studio**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/studio/tags)| `mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:latest`|
| [**Read 3.1**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/read-3.1/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1:latest`|
| [**Layout 3.1**](https://mcr.microsoft.com/en-us/product/azure-cognitive-services/form-recognizer/layout-3.1/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.1:latest`|
+| [**Invoice 3.1**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/invoice-3.1/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice-3.1:latest`|
+| [**ID Document 3.1**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/id-document-3.1/tags) | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.1:latest` |
+| [**Receipt 3.1**](https://mcr.microsoft.com/product/azure-cognitive-services/form-recognizer/receipt-3.1/tags) |`mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt-3.1:latest`|
::: moniker-end
ai-services Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/install-run.md
- ignite-2023 Previously updated : 01/17/2024 Last updated : 05/23/2024
<!-- markdownlint-disable MD051 --> :::moniker range="doc-intel-2.1.0 || doc-intel-4.0.0"- Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read and Layout only:
-* [REST API `2022-08-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
-* [REST API `2023-07-31 (GA)`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)
-* [SDKs targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md)
-* [SDKs targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md)
+* [REST API `2022-08-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)
+* [REST API `2023-07-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.1%20(2023-07-31)&tabs=HTTP&preserve-view=true)
+* [Client libraries targeting `REST API 2022-08-31 (GA)`](../sdk-overview-v3-0.md)
+* [Client libraries targeting `REST API 2023-07-31 (GA)`](../sdk-overview-v3-1.md)
-✔️ See [**Install and run Document Intelligence v3.0 containers**](?view=doc-intel-3.0.0&preserve-view=true) for supported container documentation.
+✔️ See [**Install and run Document Intelligence containers**](?view=doc-intel-3.1.0&preserve-view=true) for supported container documentation.
:::moniker-end
In this article you learn how to download, install, and run Document Intelligenc
## Prerequisites
-To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
You also need the following to use Document Intelligence containers:
The host is a x64-based computer that runs the Docker container. It can be a com
#### Required supporting containers
-The following table lists the supporting container(s) for each Document Intelligence container you download. For more information, see the [Billing](#billing) section.
+The following table lists one or more supporting containers for each Document Intelligence container you download. For more information, see the [Billing](#billing) section.
-Feature container | Supporting container(s) |
+Feature container | Supporting containers |
|--|--|
| **Read** | Not required |
| **Layout** | Not required|
Feature container | Supporting container(s) |
:::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot of Azure portal keys and endpoint page.":::
-* Ensure that the EULA value is set to *accept*.
+* Ensure that the `EULA` value is set to *accept*.
* The `EULA`, `Billing`, and `ApiKey` values must be specified; otherwise the container can't start.
Feature container | Supporting container(s) |
### [Read](#tab/read)
-The following code sample is a self-contained `docker compose` example to run the Document Intelligence Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence Read container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Read container instance.
```yml version: "3.9" azure-form-recognizer-read: container_name: azure-form-recognizer-read
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
docker-compose up
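The compose fragments above are abbreviated in this digest. For orientation, here's a minimal sketch of a complete compose file for the Read 3.1 container, assuming the `apiKey` variable and the port mapping follow the same pattern as the other container examples in this article (this is a sketch, not a tested configuration):

```bash
# Sketch only: write a minimal compose file for the Read 3.1 container and start it.
# Replace {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} with your own values.
cat > docker-compose.yml <<'EOF'
version: "3.9"
services:
  azure-form-recognizer-read:
    container_name: azure-form-recognizer-read
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1
    environment:
      - EULA=accept
      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
      # Assumed variable name, matching the pattern of the other container examples.
      - apiKey={FORM_RECOGNIZER_KEY}
    ports:
      - "5000:5000"
EOF
docker-compose up
```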
### [General Document](#tab/general-document)
-The following code sample is a self-contained `docker compose` example to run the Document Intelligence General Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your General Document and Layout container instances.
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence General Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your General Document and Layout container instances.
```yml version: "3.9"
Given the resources on the machine, the General Document container might take so
### [Layout](#tab/layout)
-The following code sample is a self-contained `docker compose` example to run the Document Intelligence Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
```yml version: "3.9" azure-form-recognizer-layout: container_name: azure-form-recognizer-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.1
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
docker-compose up
### [Invoice](#tab/invoice)
-The following code sample is a self-contained `docker compose` example to run the Document Intelligence Invoice container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout container instances.
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence Invoice container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout container instances.
+
+You must use the 3.1 GA Layout image as an upstream for both the 3.0 GA and 3.1 GA Invoice models.
```yml version: "3.9" azure-cognitive-service-invoice: container_name: azure-cognitive-service-invoice
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice-3.0
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice-3.1
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- "5000:5050" azure-cognitive-service-layout: container_name: azure-cognitive-service-layout
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.1
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
docker-compose up
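Because the Invoice container depends on a Layout upstream, the two services are typically declared together. The following is a rough sketch of that pairing, assuming the Invoice container locates its upstream through an `AzureCognitiveServiceLayoutHost` setting analogous to the `AzureCognitiveServiceReadHost` setting listed later in this article; treat the variable name, ports, and values as placeholders rather than a tested configuration:

```bash
# Sketch only: Invoice + Layout compose pair. Values and the upstream-host
# variable name are assumptions; replace placeholders with your own values.
cat > docker-compose.yml <<'EOF'
version: "3.9"
services:
  azure-cognitive-service-invoice:
    container_name: azure-cognitive-service-invoice
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice-3.1
    environment:
      - EULA=accept
      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
      - apiKey={FORM_RECOGNIZER_KEY}
      # Assumed setting: points the Invoice container at its Layout upstream.
      - AzureCognitiveServiceLayoutHost=http://azure-cognitive-service-layout:5000
    ports:
      - "5000:5050"
  azure-cognitive-service-layout:
    container_name: azure-cognitive-service-layout
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.1
    environment:
      - EULA=accept
      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
      - apiKey={FORM_RECOGNIZER_KEY}
EOF
docker-compose up
```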
### [Receipt](#tab/receipt)
-The following code sample is a self-contained `docker compose` example to run the Document Intelligence General Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt and Read container instances.
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence Receipt container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt and Read container instances.
+
+You can use the 3.1 GA Layout image as an upstream instead of the Read image.
```yml version: "3.9" azure-cognitive-service-receipt: container_name: azure-cognitive-service-receipt
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt-3.0
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt-3.1
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- "5000:5050" azure-cognitive-service-read: container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
docker-compose up
### [ID Document](#tab/id-document)
-The following code sample is a self-contained `docker compose` example to run the Document Intelligence General Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID and Read container instances.
+The following code sample is a self-contained `docker compose` example to run the Document Intelligence ID Document container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID Document and Read container instances.
+
+You can use the 3.1 GA Layout image as an upstream instead of the Read image.
```yml version: "3.9" azure-cognitive-service-id-document: container_name: azure-cognitive-service-id-document
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.0
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document-3.1
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
- "5000:5050" azure-cognitive-service-read: container_name: azure-cognitive-service-read
- image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.1
environment: - EULA=accept - billing={FORM_RECOGNIZER_ENDPOINT_URI}
In addition to the [prerequisites](#prerequisites), you need to do the following to process a custom document:
-#### Create a folder to store the following files
+#### Create a folder and store the following files
* [**.env**](#create-an-environment-file) * [**nginx.conf**](#create-an-nginx-file) * [**docker-compose.yml**](#create-a-docker-compose-file)
-#### Create a folder to store your input data
+#### Create a folder and store your input data
* Name this folder **files**. * We reference the file path for this folder as **{FILE_MOUNT_PATH}**.
http {
1. Name this file **docker-compose.yml**
-2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Studio and Custom template containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration.
+2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Studio, and Custom template containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration.
```yml version: '3.3'
$source .env
$docker-compose up ```
-Custom template containers require a few different configurations and support other optional configurations
+Custom template containers require a few different configurations and support other optional configurations.
| Setting | Required | Description |
|---|---|---|
-|EULA | Yes | License acceptance Example: Eula=accept|
+|`EULA` | Yes | License acceptance Example: Eula=accept|
|Billing | Yes | Billing endpoint URI of the FR resource |
|ApiKey | Yes | The endpoint key of the FR resource |
| Queue:Azure:ConnectionString | No| Azure Queue connection string |
|Storage:ObjectStore:AzureBlob:ConnectionString | No| Azure Blob connection string |
| HealthCheck:MemoryUpperboundInMB | No | Memory threshold for reporting unhealthy to liveness. Default: Same as recommended memory |
-| StorageTimeToLiveInMinutes | No| TTL duration to remove all intermediate and final files. Default: Two days, TTL can set between five minutes to seven days |
+| StorageTimeToLiveInMinutes | No| `TTL` duration to remove all intermediate and final files. Default: Two days. `TTL` can be set between five minutes and seven days |
| Task:MaxRunningTimeSpanInMinutes | No| Maximum running time for treating request as timeout. Default: 60 minutes |
| HTTP_PROXY_BYPASS_URLS | No | Specify URLs for bypassing proxy Example: HTTP_PROXY_BYPASS_URLS = abc.com, xyz.com |
| AzureCognitiveServiceReadHost (Receipt, IdDocument Containers Only)| Yes | Specify Read container URI Example: AzureCognitiveServiceReadHost=http://onprem-frread:5000 |
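For orientation only, here's a rough sketch of how a few of these settings might be collected into a settings file for the custom template container. The setting names follow the table above; the values, the file name, and the Read container URI are placeholders rather than a tested configuration.

```bash
# Sketch only: required plus a few optional settings for a custom template container.
# All values are placeholders; adjust to your environment.
cat > custom-template.env <<'EOF'
EULA=accept
Billing={FORM_RECOGNIZER_ENDPOINT_URI}
ApiKey={FORM_RECOGNIZER_KEY}
# Optional: remove intermediate and final files after two days (2,880 minutes).
StorageTimeToLiveInMinutes=2880
# Optional: treat requests that run longer than 60 minutes as timed out.
Task:MaxRunningTimeSpanInMinutes=60
# Receipt and ID Document containers only: URI of the Read container.
AzureCognitiveServiceReadHost=http://azure-cognitive-service-read:5000
EOF
```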
Custom template containers require a few different configurations and support ot
* Provide a subfolder for where your training data is located within the files folder. * Finally, create the project
-You should now have a project created, ready for labeling. Upload your training data and get started labeling. If you're new to labeling, see [build and train a custom model](../how-to-guides/build-a-custom-model.md)
+You should now have a project created, ready for labeling. Upload your training data and get started labeling. If you're new to labeling, see [build and train a custom model](../how-to-guides/build-a-custom-model.md).
#### Using the API to train
ai-services Create Document Intelligence Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/create-document-intelligence-resource.md
Title: Create a Document Intelligence (formerly Form Recognizer) resource
-description: Create a Document Intelligence resource in the Azure portal
+description: Create a Document Intelligence resource in the Azure portal.
- ignite-2023 Previously updated : 11/15/2023- Last updated : 04/24/2024+
ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/create-sas-tokens.md
The Azure portal is a web-based console that enables you to manage your Azure su
> :::image type="content" source="media/sas-tokens/need-permissions.png" alt-text="Screenshot that shows the lack of permissions warning."::: > > * [Azure role-based access control](../../role-based-access-control/overview.md) (Azure RBAC) is the authorization system used to manage access to Azure resources. Azure RBAC helps you manage access and permissions for your Azure resources.
- > * [Assign an Azure role for access to blob data](../../role-based-access-control/role-assignments-portal.md?tabs=current) to assign a role that allows for read, write, and delete permissions for your Azure storage container. *See* [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor).
+ > * [Assign an Azure role for access to blob data](../../role-based-access-control/role-assignments-portal.yml?tabs=current) to assign a role that allows for read, write, and delete permissions for your Azure storage container. *See* [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor).
1. Specify the signed key **Start** and **Expiry** times.
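If you script this step instead of using the portal, a shared access signature with explicit start and expiry times can also be generated with the Azure CLI. A minimal sketch, assuming placeholder account and container names and read/list permissions (adjust the permissions and dates to your scenario):

```bash
# Sketch: generate a user-delegation SAS for a blob container with explicit
# start and expiry times. Account and container names are placeholders.
az storage container generate-sas \
  --account-name mystorageaccount \
  --name training-data \
  --permissions rl \
  --start 2024-06-01T00:00Z \
  --expiry 2024-06-30T00:00Z \
  --auth-mode login \
  --as-user \
  --output tsv
```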
ai-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/disaster-recovery.md
- ignite-2023 Previously updated : 03/06/2024 Last updated : 04/23/2024
The process for copying a custom model consists of the following steps:
The following HTTP request gets copy authorization from your target resource. You need to enter the endpoint and key of your target resource as headers. ```http
-POST https://<your-resource-name>/documentintelligence/documentModels/{modelId}:copyTo?api-version=2024-02-29-preview
+POST https://<your-resource-endpoint>/documentintelligence/documentModels:authorizeCopy?api-version=2024-02-29-preview
Ocp-Apim-Subscription-Key: {<your-key>} ```
You receive a `200` response code with response body that contains the JSON payl
The following HTTP request starts the copy operation on the source resource. You need to enter the endpoint and key of your source resource as the url and header. Notice that the request URL contains the model ID of the source model you want to copy. ```http
-POST https://<your-resource-name>/documentintelligence/documentModels/{modelId}:copyTo?api-version=2024-02-29-preview
+POST https://<your-resource-endpoint>/documentintelligence/documentModels/{modelId}:copyTo?api-version=2024-02-29-preview
Ocp-Apim-Subscription-Key: {<your-key>} ```
You receive a `202\Accepted` response with an Operation-Location header. This va
```http HTTP/1.1 202 Accepted
-Operation-Location: https://<your-resource-name>.cognitiveservices.azure.com/documentintelligence/operations/{operation-id}?api-version=2024-02-29-preview
+Operation-Location: https://<your-resource-endpoint>.cognitiveservices.azure.com/documentintelligence/operations/{operation-id}?api-version=2024-02-29-preview
``` > [!NOTE]
Operation-Location: https://<your-resource-name>.cognitiveservices.azure.com/doc
## Track Copy progress ```console
-GET https://<your-resource-name>.cognitiveservices.azure.com/documentintelligence/operations/{<operation-id>}?api-version=2024-02-29-preview
+GET https://<your-resource-endpoint>.cognitiveservices.azure.com/documentintelligence/operations/{<operation-id>}?api-version=2024-02-29-preview
Ocp-Apim-Subscription-Key: {<your-key>} ```
Ocp-Apim-Subscription-Key: {<your-key>}
You can also use the **[Get model](/rest/api/aiservices/document-models/get-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)** API to track the status of the operation by querying the target model. Call the API using the target model ID that you copied down from the [Generate Copy authorization request](#generate-copy-authorization-request) response. ```http
-GET https://<your-resource-name>/documentintelligence/documentModels/{modelId}?api-version=2024-02-29-preview" -H "Ocp-Apim-Subscription-Key: <your-key>
+GET https://<your-resource-endpoint>/documentintelligence/documentModels/{modelId}?api-version=2024-02-29-preview" -H "Ocp-Apim-Subscription-Key: <your-key>
``` In the response body, you see information about the model. Check the `"status"` field for the status of the model.
The following code snippets use cURL to make API calls. You also need to fill in
**Request** ```bash
-curl -i -X POST "<your-resource-name>/documentintelligence/documentModels:authorizeCopy?api-version=2024-02-29-preview"
+curl -i -X POST "<your-resource-endpoint>/documentintelligence/documentModels:authorizeCopy?api-version=2024-02-29-preview"
-H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <YOUR-KEY>" --data-ascii "{
curl -i -X POST "<your-resource-name>/documentintelligence/documentModels:author
**Request** ```bash
-curl -i -X POST "<your-resource-name>/documentintelligence/documentModels/{modelId}:copyTo?api-version=2024-02-29-preview"
+curl -i -X POST "<your-resource-endpoint>/documentintelligence/documentModels/{modelId}:copyTo?api-version=2024-02-29-preview"
-H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <YOUR-KEY>" --data-ascii "{
curl -i -X POST "<your-resource-name>/documentintelligence/documentModels/{model
```http HTTP/1.1 202 Accepted
-Operation-Location: https://<your-resource-name>.cognitiveservices.azure.com/documentintelligence/operations/{operation-id}?api-version=2024-02-29-preview
+Operation-Location: https://<your-resource-endpoint>.cognitiveservices.azure.com/documentintelligence/operations/{operation-id}?api-version=2024-02-29-preview
``` ### Track copy operation progress
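As a sketch, you can poll the URL returned in the `Operation-Location` header until the operation reaches a terminal status; the endpoint, key, and operation ID below are placeholders:

```bash
# Poll the copy operation by calling the URL from the Operation-Location header.
# Repeat until the response body reports a terminal status (for example, "succeeded").
curl -i -X GET \
  "https://<your-resource-endpoint>.cognitiveservices.azure.com/documentintelligence/operations/<operation-id>?api-version=2024-02-29-preview" \
  -H "Ocp-Apim-Subscription-Key: <your-key>"
```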
In this guide, you learned how to use the Copy API to back up your custom models
::: moniker range="doc-intel-2.1.0" In this guide, you learned how to use the Copy API to back up your custom models to a secondary Document Intelligence resource. Next, explore the API reference docs to see what else you can do with Document Intelligence.
-* [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)
+* [REST API reference documentation](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)
::: moniker-end
ai-services Build A Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/build-a-custom-classifier.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 05/23/2024 monikerRange: '>=doc-intel-3.0.0'
ai-services Build A Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/build-a-custom-model.md
- ignite-2023 Previously updated : 02/27/2024 Last updated : 05/23/2024 monikerRange: '<=doc-intel-4.0.0'
Congratulations you learned to train a custom model in the Document Intelligence
**Applies to:** ![Document Intelligence v2.1 checkmark](../medi?view=doc-intel-3.0.0&preserve-view=true?view=doc-intel-3.0.0&preserve-view=true)
-When you use the Document Intelligence custom model, you provide your own training data to the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) operation, so that the model can train to your industry-specific forms. Follow this guide to learn how to collect and prepare data to train the model effectively.
+When you use the Document Intelligence custom model, you provide your own training data to the [Train Custom Model](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true) operation, so that the model can train to your industry-specific forms. Follow this guide to learn how to collect and prepare data to train the model effectively.
You need at least five completed forms of the same type.
If you want to use manually labeled data, upload the *.labels.json* and *.ocr.js
### Organize your data in subfolders (optional)
-By default, the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) API only uses documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. Normally, the body of the [Train Custom Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/TrainCustomModelAsync) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container:
+By default, the [Train Custom Model](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true) API only uses documents that are located at the root of your storage container. However, you can train with data in subfolders if you specify it in the API call. Normally, the body of the [Train Custom Model](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true) call has the following format, where `<SAS URL>` is the Shared access signature URL of your container:
```json {
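The request body is truncated in this digest. As a hedged sketch, a train call that limits training to a subfolder might look like the following; the `sourceFilter` fields assume the v2.1 `TrainRequest` schema, and the endpoint, key, SAS URL, and folder name are placeholders:

```bash
# Sketch: train a v2.1 custom model from documents under a subfolder of the container.
# The sourceFilter fields are assumed from the v2.1 TrainRequest schema.
curl -i -X POST "https://<your-resource-endpoint>/formrecognizer/v2.1/custom/models" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  --data-ascii '{
    "source": "<SAS URL>",
    "sourceFilter": {
      "prefix": "<subfolder-name>/",
      "includeSubFolders": false
    },
    "useLabelFile": false
  }'
```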
ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/compose-custom-models.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 05/23/2024
ai-services Estimate Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/estimate-cost.md
- ignite-2023 Previously updated : 07/18/2023 Last updated : 05/23/2024
ai-services Project Share Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/project-share-custom-models.md
- ignite-2023 Previously updated : 07/18/2023 Last updated : 05/23/2024 monikerRange: '>=doc-intel-3.0.0'
ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md
Previously updated : 03/28/2024 Last updated : 05/23/2024 zone_pivot_groups: programming-languages-set-formre
ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/managed-identities-secured-access.md
- ignite-2023 Previously updated : 07/18/2023 Last updated : 05/23/2024 monikerRange: '<=doc-intel-4.0.0'
Configure each of the resources to ensure that the resources can communicate wit
* Configure the Document Intelligence Studio to use the newly created Document Intelligence resource by accessing the settings page and selecting the resource.
-* Validate that the configuration works by selecting the Read API and analyzing a sample document. If the resource was configured correctly, the request successfully completes.
+* Confirm that the configuration works by selecting the Read API and analyzing a sample document. If the resource was configured correctly, the request completes successfully.
* Add a training dataset to a container in the Storage account you created.
Configure each of the resources to ensure that the resources can communicate wit
* Select the container with the training dataset you uploaded in the previous step. Ensure that if the training dataset is within a folder, the folder path is set appropriately.
-* If you have the required permissions, the Studio sets the CORS setting required to access the storage account. If you don't have the permissions, you need to ensure that the CORS settings are configured on the Storage account before you can proceed.
+* Ensure that you have the required permissions. If you do, the Studio sets the CORS setting required to access the storage account. If you don't have the permissions, make certain that the CORS settings are configured on the Storage account before you can proceed.
-* Validate that the Studio is configured to access your training data, if you can see your documents in the labeling experience, all the required connections are established.
+* Confirm that the Studio is configured to access your training data. If you can see your documents in the labeling experience, all the required connections are established.
You now have a working implementation of all the components needed to build a Document Intelligence solution with the default security model:
You now have a working implementation of all the components needed to build a Do
Next, complete the following steps:
-* Setup managed identity on the Document Intelligence resource.
+* Configure managed identity on the Document Intelligence resource.
* Secure the storage account to restrict traffic from only specific virtual networks and IP addresses. * Configure the Document Intelligence managed identity to communicate with the storage account.
-* Disable public access to the Document Intelligence resource and create a private endpoint to make it accessible from only specific virtual networks and IP addresses.
+* Disable public access to the Document Intelligence resource and create a private endpoint. Your resource is then only accessible from specific virtual networks and IP addresses.
* Add a private endpoint for the storage account in a selected virtual network.
-* Validate that you can train models and analyze documents from within the virtual network.
+* Confirm that you can train models and analyze documents from within the virtual network.
## Setup managed identity for Document Intelligence
Navigate to the Document Intelligence resource in the Azure portal and select th
:::image type="content" source="media/managed-identities/v2-fr-mi.png" alt-text="Screenshot of configure managed identity.":::
-## Secure the Storage account to limit traffic
+## Secure the Storage account
Start configuring secure communications by navigating to the **Networking** tab on your **Storage account** in the Azure portal.
Great! You configured your Document Intelligence resource to use a managed ident
> When you try the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio), you'll see the READ API and other prebuilt models don't require storage access to process documents. However, training a custom model requires additional configuration because the Studio can't directly communicate with a storage account. > You can enable storage access by selecting **Add your client IP address** from the **Networking** tab of the storage account to configure your machine to access the storage account via IP allowlisting.
-## Configure private endpoints for access from VNETs
+## Configure private endpoints for access from `VNET`s
> [!NOTE] >
To validate your deployment, you can deploy a virtual machine (VM) to the virtua
1. Configure a [Data Science VM](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview) in the virtual network.
-1. Remotely connect into the VM from your desktop to launch a browser session to access Document Intelligence Studio.
+1. Remotely connect into the VM from your desktop and launch a browser session that accesses Document Intelligence Studio.
1. Analyze requests and the training operations should now work successfully.
That's it! You can now configure secure access for your Document Intelligence re
:::image type="content" source="media/managed-identities/cors-error.png" alt-text="Screenshot of error message when CORS config is required"::: **Resolution**:
- 1. [Configure CORS](quickstarts/try-document-intelligence-studio.md#prerequisites-for-new-users).
+ 1. [Configure CORS](quickstarts/try-document-intelligence-studio.md#configure-cors).
- 1. Make sure the client computer can access Document Intelligence resource and storage account, either they are in the same VNET, or client IP address is allowed in **Networking > Firewalls and virtual networks** setting page of both Document Intelligence resource and storage account.
+ 1. Make sure the client computer can access Document Intelligence resource and storage account, either they are in the same `VNET`, or client IP address is allowed in **Networking > Firewalls and virtual networks** setting page of both Document Intelligence resource and storage account.
* **AuthorizationFailure**: :::image type="content" source="media/managed-identities/auth-failure.png" alt-text="Screenshot of authorization failure error.":::
- **Resolution**: Make sure the client computer can access Document Intelligence resource and storage account, either they are in the same VNET, or client IP address is allowed in **Networking > Firewalls and virtual networks** setting page of both Document Intelligence resource and storage account.
+ **Resolution**: Make sure the client computer can access Document Intelligence resource and storage account, either they are in the same `VNET`, or client IP address is allowed in **Networking > Firewalls and virtual networks** setting page of both Document Intelligence resource and storage account.
* **ContentSourceNotAccessible**:
That's it! You can now configure secure access for your Document Intelligence re
:::image type="content" source="media/managed-identities/access-denied.png" alt-text="Screenshot of an access denied error.":::
- **Resolution**: Check to make sure there's connectivity between the computer accessing the Document Intelligence Studio and the Document Intelligence service. For example, you might need to add the client IP address to the Document Intelligence service's networking tab.
+ **Resolution**: Check to make sure there's connectivity between the computer accessing the Document Intelligence Studio and the Document Intelligence service. For example, you might need to allow the client IP address in **Networking > Firewalls and virtual networks** setting page of both Document Intelligence resource and storage account.
## Next steps
ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/managed-identities.md
Title: Create and use managed identities with Document Intelligence (formerly Form Recognizer)
-description: Understand how to create and use managed identity with Document Intelligence
+description: Understand how to create and use managed identity with Document Intelligence.
- ignite-2023 Previously updated : 07/18/2023 Last updated : 05/23/2024 monikerRange: '<=doc-intel-4.0.0'
Managed identities for Azure resources are service principals that create a Micr
:::image type="content" source="media/managed-identities/rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
-* You can use managed identities to grant access to any resource that supports Microsoft Entra authentication, including your own applications. Unlike security keys and authentication tokens, managed identities eliminate the need for developers to manage credentials.
+* Managed identities grant access to any resource that supports Microsoft Entra authentication, including your own applications. Unlike security keys and authentication tokens, managed identities eliminate the need for developers to manage credentials.
-* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-
-* There's no added cost to use managed identities in Azure.
+* You can grant access to an Azure resource and assign an Azure role to a managed identity using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). There's no added cost to use managed identities in Azure.
> [!IMPORTANT] >
Managed identities for Azure resources are service principals that create a Micr
## Private storage account access
- Private Azure storage account access and authentication support [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). If you have an Azure storage account, protected by a Virtual Network (VNet) or firewall, Document Intelligence can't directly access your storage account data. However, once a managed identity is enabled, Document Intelligence can access your storage account using an assigned managed identity credential.
+ Private Azure storage account access and authentication support [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). If you have an Azure storage account, protected by a Virtual Network (`VNet`) or firewall, Document Intelligence can't directly access your storage account data. However, once a managed identity is enabled, Document Intelligence can access your storage account using an assigned managed identity credential.
> [!NOTE] > > * If you intend to analyze your storage data with the [**Document Intelligence Sample Labeling tool (FOTT)**](https://fott-2-1.azurewebsites.net/), you must deploy the tool behind your VNet or firewall. >
-> * The Analyze [**Receipt**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeReceiptAsync), [**Business Card**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync), [**Invoice**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5ed8c9843c2794cbb1a96291), [**ID document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5f74a7738978e467c5fb8707), and [**Custom Form**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) APIs can extract data from a single document by posting requests as raw binary content. In these scenarios, there is no requirement for a managed identity credential.
+> * The `Analyze` [**Receipt**](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true), [**Business Card**](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true), [**Invoice**](/rest/api/aiservices/operation-groups?view=rest-aiservices-v2.1&preserve-view=true), [**ID document**](/rest/api/aiservices/operation-groups?view=rest-aiservices-v2.1&preserve-view=true), and [**Custom Form**](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true) APIs can extract data from a single document by posting requests as raw binary content. In these scenarios, there is no requirement for a managed identity credential.
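To illustrate that point, posting a document directly as binary content to a v2.1 prebuilt endpoint needs only the resource key, not a managed identity or storage access. A minimal sketch with placeholder endpoint, key, and file name:

```bash
# Sketch: analyze a single receipt by posting raw binary content to the v2.1 API.
# No storage account or managed identity is involved in this call.
curl -i -X POST \
  "https://<your-resource-endpoint>/formrecognizer/v2.1/prebuilt/receipt/analyze" \
  -H "Content-Type: application/pdf" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  --data-binary "@receipt.pdf"
```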
## Prerequisites
To get started, you need:
* In the main window, select **Allow access from selected networks**. :::image type="content" source="media/managed-identities/firewalls-and-virtual-networks.png" alt-text="Screenshot of Selected networks radio button selected.":::
- * On the selected networks page, navigate to the **Exceptions** category and make certain that the [**Allow Azure services on the trusted services list to access this storage account**](../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions) checkbox is enabled.
+ * On the selected networks page, navigate to the **Exceptions** category and make certain that the [**`Allow Azure services on the trusted services list to access this storage account`**](../../storage/common/storage-network-security.md?tabs=azure-portal#manage-exceptions) checkbox is enabled.
:::image type="content" source="media/managed-identities/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot of allow trusted services checkbox, portal view":::
-* A brief understanding of [**Azure role-based access control (Azure RBAC)**](../../role-based-access-control/role-assignments-portal.md) using the Azure portal.
+* A brief understanding of [**Azure role-based access control (Azure RBAC)**](../../role-based-access-control/role-assignments-portal.yml) using the Azure portal.
## Managed identity assignments
In the following steps, we enable a system-assigned managed identity and grant D
## Grant access to your storage account
-You need to grant Document Intelligence access to your storage account before it can read blobs. Now that you've enabled Document Intelligence with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC), to give Document Intelligence access to Azure storage. The **Storage Blob Data Reader** role gives Document Intelligence (represented by the system-assigned managed identity) read and list access to the blob container and data.
+You need to grant Document Intelligence access to your storage account before it can read blobs. Now that Document Intelligence is enabled with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC) to give Document Intelligence access to Azure storage. The **Storage Blob Data Reader** role gives Document Intelligence (represented by the system-assigned managed identity) read and list access to the blob container and data.
1. Under **Permissions** select **Azure role assignments**:
You need to grant Document Intelligence access to your storage account before it
> > If you're unable to assign a role in the Azure portal because the Add > Add role assignment option is disabled or you get the permissions error, "you do not have permissions to add role assignment at this scope", check that you're currently signed in as a user with an assigned a role that has Microsoft.Authorization/roleAssignments/write permissions such as Owner or User Access Administrator at the Storage scope for the storage resource.
-1. Next, you're going to assign a **Storage Blob Data Reader** role to your Document Intelligence service resource. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
+1. Next, you're going to assign a **Storage Blob Data Reader** role to your Document Intelligence service resource. In the **`Add role assignment`** pop-up window, complete the fields as follows and select **Save**:
| Field | Value| ||--|
You need to grant Document Intelligence access to your storage account before it
:::image type="content" source="media/managed-identities/add-role-assignment-window.png" alt-text="Screenshot of add role assignments page in the Azure portal.":::
-1. After you've received the _Added Role assignment_ confirmation message, refresh the page to see the added role assignment.
+1. After you receive the _Added Role assignment_ confirmation message, refresh the page to see the added role assignment.
:::image type="content" source="media/managed-identities/add-role-assignment-confirmation.png" alt-text="Screenshot of Added role assignment confirmation pop-up message.":::
You need to grant Document Intelligence access to your storage account before it
:::image type="content" source="media/managed-identities/assigned-roles-window.png" alt-text="Screenshot of Azure role assignments window.":::
- That's it! You've completed the steps to enable a system-assigned managed identity. With managed identity and Azure RBAC, you granted Document Intelligence specific access rights to your storage resource without having to manage credentials such as SAS tokens.
+ That's it! You completed the steps to enable a system-assigned managed identity. With managed identity and Azure RBAC, you granted Document Intelligence specific access rights to your storage resource without having to manage credentials such as SAS tokens.
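For readers who script their setup, here's a minimal Azure CLI sketch of the same flow, assuming placeholder resource and group names: enable the system-assigned identity, then grant it **Storage Blob Data Reader** on the storage account.

```bash
# Sketch: enable a system-assigned managed identity on the Document Intelligence
# resource and grant it Storage Blob Data Reader on the storage account.
# Resource names and the resource group are placeholders.
PRINCIPAL_ID=$(az cognitiveservices account identity assign \
  --name <your-document-intelligence-resource> \
  --resource-group <your-resource-group> \
  --query principalId --output tsv)

STORAGE_ID=$(az storage account show \
  --name <your-storage-account> \
  --resource-group <your-resource-group> \
  --query id --output tsv)

az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Storage Blob Data Reader" \
  --scope "$STORAGE_ID"
```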
+
+### Other role assignments for Document Intelligence Studio
-### Additional role assignment for Document Intelligence Studio
+If you're going to use Document Intelligence Studio and your storage account is configured with a network restriction such as a firewall or virtual network, another role, **Storage Blob Data Contributor**, needs to be assigned to your Document Intelligence service. Document Intelligence Studio requires this role to write blobs to your storage account when you perform Auto label, Human in the loop, or Project sharing/upgrade operations.
-If you are going to use Document Intelligence Studio and your storage account is configured with network restriction such as firewall or virtual network, an additional role, **Storage Blob Data Contributor**, needs to be assigned to your Document Intelligence service. Document Intelligence Studio requires this role to write blobs to your storage account when you perform Auto label, OCR upgrade, Human in the loop, or Project sharing operations.
+ :::image type="content" source="media/managed-identities/blob-data-contributor-role.png" alt-text="Screenshot of assigning storage blob data contributor role.":::
## Next steps > [!div class="nextstepaction"]
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
- ignite-2023 Previously updated : 02/29/2024 Last updated : 05/07/2024 monikerRange: '<=doc-intel-4.0.0'
Azure AI Document Intelligence is a cloud-based [Azure AI service](../../ai-serv
## Document analysis models
-Document analysis models enable text extraction from forms and documents and return structured business-ready content ready for your organization's action, use, or progress.
+Document analysis models enable text extraction from forms and documents and return structured, business-ready content for your organization's action, use, or development.
+ :::moniker range="doc-intel-4.0.0" :::row::: :::column:::
Prebuilt models enable you to add intelligent document processing to your apps a
:::row::: :::column span=""::: :::image type="icon" source="media/overview/icon-invoice.png" link="#invoice":::</br>
- [**Invoice**](#invoice) | Extract customer </br>and vendor details.
+ [**Invoice**](#invoice) | Extract customer and vendor details.
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-receipt.png" link="#receipt":::</br>
- [**Receipt**](#receipt) | Extract sales </br>transaction details.
+ [**Receipt**](#receipt) | Extract sales transaction details.
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-id-document.png" link="#identity-id":::</br>
- [**Identity**](#identity-id) | Extract identification </br>and verification details.
+ [**Identity**](#identity-id) | Extract verification details.
:::column-end::: :::row-end::: :::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-invoice.png" link="#invoice":::</br>
- [**1003 EULA**](#invoice) | Extract mortgage details.
+ :::image type="icon" source="media/overview/icon-mortgage-1003.png" link="#invoice":::</br>
+ [**US mortgage 1003**](#us-mortgage-1003-form) | Extract loan application details.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-receipt.png" link="#receipt":::</br>
- [**Form 1008**](#receipt) | Extract mortgage details.
+ :::image type="icon" source="media/overview/icon-mortgage-1008.png" link="#receipt":::</br>
+ [**US mortgage 1008**](#us-mortgage-1008-form) | Extract loan transmittal details.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-id-document.png" link="#identity-id":::</br>
- [**Closing Disclosure**](#identity-id) | Extract mortgage details.
+ :::image type="icon" source="media/overview/icon-mortgage-disclosure.png" link="#identity-id":::</br>
+ [**US mortgage disclosure**](#us-mortgage-disclosure-form) | Extract final closing loan terms.
:::column-end::: :::row-end::: :::row::: :::column span=""::: :::image type="icon" source="media/overview/icon-insurance-card.png" link="#health-insurance-card":::</br>
- [**Health Insurance card**](#health-insurance-card) | Extract health </br>insurance details.
+ [**Health Insurance card**](#health-insurance-card) | Extract insurance coverage details.
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model":::</br>
- [**Contract**](#contract-model) | Extract agreement</br> and party details.
+ [**Contract**](#contract-model) | Extract agreement and party details.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model":::</br>
- [**Credit/Debit card**](#contract-model) | Extract information from bank cards.
+ :::image type="icon" source="media/overview/icon-payment-card.png" link="#contract-model":::</br>
+ [**Credit/Debit card**](#credit-card-model) | Extract payment card information.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model":::</br>
- [**Marriage Certificate**](#contract-model) | Extract information from Marriage certificates.
+ :::image type="icon" source="media/overview/icon-marriage-certificate.png" link="#contract-model":::</br>
+ [**Marriage certificate**](#marriage-certificate-model) | Extract certified marriage information.
:::column-end::: :::row-end::: :::row::: :::column span=""::: :::image type="icon" source="media/overview/icon-w2.png" link="#us-tax-w-2-model":::</br>
- [**US Tax W-2 form**](#us-tax-w-2-model) | Extract taxable </br>compensation details.
+ [**US Tax W-2 form**](#us-tax-w-2-model) | Extract taxable compensation details.
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-1098.png" link="#us-tax-1098-form":::</br> [**US Tax 1098 form**](#us-tax-1098-form) | Extract mortgage interest details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-1098e.png" link="#us-tax-1098-e-form":::</br>
+ :::image type="icon" source="media/overview/icon-1098-e.png" link="#us-tax-1098-e-form":::</br>
[**US Tax 1098-E form**](#us-tax-1098-e-form) | Extract student loan interest details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-1098t.png" link="#us-tax-1098-t-form":::</br>
+ :::image type="icon" source="media/overview/icon-1098-t.png" link="#us-tax-1098-t-form":::</br>
[**US Tax 1098-T form**](#us-tax-1098-t-form) | Extract qualified tuition details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-1098t.png" link="#us-tax-1098-t-form":::</br>
- [**US Tax 1099 form**](concept-tax-document.md#field-extraction-1099-nec) | Extract information from variations of the 1099 form.
+ :::image type="icon" source="media/overview/icon-1099.png" link="#us-tax-1098-t-form":::</br>
+ [**US Tax 1099 form**](#us-tax-1099-and-variations-form) | Extract form 1099 variation details.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-1098t.png" link="#us-tax-1098-t-form":::</br>
- [**US Tax 1040 form**](concept-tax-document.md#field-extraction-1099-nec) | Extract information from variations of the 1040 form.
+ :::image type="icon" source="media/overview/icon-1040.png" link="#us-tax-1098-t-form":::</br>
+ [**US Tax 1040 form**](#us-tax-1040-form) | Extract form 1040 variation details.
:::column-end::: :::row-end::: :::moniker-end
Prebuilt models enable you to add intelligent document processing to your apps a
[**US Tax 1098 form**](#us-tax-1098-form) | Extract mortgage interest details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-1098e.png" link="#us-tax-1098-e-form":::</br>
+ :::image type="icon" source="media/overview/icon-1098-e.png" link="#us-tax-1098-e-form":::</br>
[**US Tax 1098-E form**](#us-tax-1098-e-form) | Extract student loan interest details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-1098t.png" link="#us-tax-1098-t-form":::</br>
+ :::image type="icon" source="media/overview/icon-1098-t.png" link="#us-tax-1098-t-form":::</br>
[**US Tax 1098-T form**](#us-tax-1098-t-form) | Extract qualified tuition details. :::column-end::: :::row-end:::
Document Intelligence supports optional features that can be enabled and disable
* [`ocr.barcode`](concept-add-on-capabilities.md#barcode-property-extraction)
-Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities areavailable for`2024-02-29-preview`, `2023-10-31-preview`, and later releases:
+Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for `2024-02-29-preview`, `2023-10-31-preview`, and later releases:
* [`queryFields`](concept-add-on-capabilities.md#query-fields)
You can use Document Intelligence to automate document processing in application
### General document (deprecated in 2023-10-31-preview) | Model ID | Description |Automation use cases | Development options | |-|--|-|--|
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-invoice**](concept-invoice.md) |&#9679; Extract key information from invoices.</br>&#9679; [Data and field extraction](concept-invoice.md#field-extraction) |&#9679; Accounts payable processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-invoice**](concept-invoice.md) |&#9679; Extract key information from invoices.</br>&#9679; [Data and field extraction](concept-invoice.md#field-extraction) |&#9679; Accounts payable processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=invoice&formType=invoice)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-receipt**](concept-receipt.md) |&#9679; Extract key information from receipts.</br>&#9679; [Data and field extraction](concept-receipt.md#field-extraction)</br>&#9679; Receipt model v3.0 supports processing of **single-page hotel receipts**.|&#9679; Expense management.</br>&#9679; Consumer behavior data analysis.</br>&#9679; Customer loyalty program.</br>&#9679; Merchandise return processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-receipt**](concept-receipt.md) |&#9679; Extract key information from receipts.</br>&#9679; [Data and field extraction](concept-receipt.md#field-extraction)</br>&#9679; Receipt model v3.0 supports processing of **single-page hotel receipts**.|&#9679; Expense management.</br>&#9679; Consumer behavior data analysis.</br>&#9679; Customer loyalty program.</br>&#9679; Merchandise return processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=receipt&formType=receipt)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models) ### Identity (ID) +
+| Model ID | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**prebuilt-idDocument**](concept-id-document.md) |&#9679; Extract key information from passports and ID cards.</br>&#9679; [Document types](concept-id-document.md#supported-document-types)</br>&#9679; Extract endorsements, restrictions, and vehicle classifications from US driver's licenses. |&#9679; Know your customer (KYC) financial services guidelines compliance.</br>&#9679; Medical account management.</br>&#9679; Identity checkpoints and gateways.</br>&#9679; Hotel registration. |&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=idDocument&formType=idDocument)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### US mortgage 1003 form
++
+| Model ID | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**prebuilt-mortgage.us.1003**](concept-mortgage-documents.md)|&#9679; Extract key information from `1003` loan applications. </br>&#9679; [Data and field extraction](concept-mortgage-documents.md#field-extraction-1003-uniform-residential-loan-application-urla)|&#9679; Fannie Mae and Freddie Mac documentation requirements.| &#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=mortgage.us.1003&formType=mortgage.us.1003)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### US mortgage 1008 form
+ | Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-idDocument**](concept-id-document.md) |&#9679; Extract key information from passports and ID cards.</br>&#9679; [Document types](concept-id-document.md#supported-document-types)</br>&#9679; Extract endorsements, restrictions, and vehicle classifications from US driver's licenses. |&#9679; Know your customer (KYC) financial services guidelines compliance.</br>&#9679; Medical account management.</br>&#9679; Identity checkpoints and gateways.</br>&#9679; Hotel registration. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-mortgage.us.1008**](concept-mortgage-documents.md)|&#9679; Extract key information from Uniform Underwriting and Transmittal Summary. </br>&#9679; [Data and field extraction](concept-mortgage-documents.md#field-extraction-1008-uniform-underwriting-and-transmittal-summary)|&#9679; Loan underwriting processing using summary data.| &#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=mortgage.us.1008&formType=mortgage.us.1008)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### US mortgage disclosure form
++
+| Model ID | Description |Automation use cases | Development options |
+|-|--|-|--|
+|[**prebuilt-mortgage.us.closingDisclosure**](concept-mortgage-documents.md)|&#9679; Extract key information from mortgage closing disclosures. </br>&#9679; [Data and field extraction](concept-mortgage-documents.md#field-extraction-mortgage-closing-disclosure)|&#9679; Mortgage loan final details requirements.| &#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=mortgage.us.closingDisclosure&formType=mortgage.us.closingDisclosure)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-| [**prebuilt-healthInsuranceCard.us**](concept-health-insurance-card.md)|&#9679; Extract key information from US health insurance cards.</br>&#9679; [Data and field extraction](concept-health-insurance-card.md#field-extraction)|&#9679; Coverage and eligibility verification. </br>&#9679; Predictive modeling.</br>&#9679; Value-based analytics.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+| [**prebuilt-healthInsuranceCard.us**](concept-health-insurance-card.md)|&#9679; Extract key information from US health insurance cards.</br>&#9679; [Data and field extraction](concept-health-insurance-card.md#field-extraction)|&#9679; Coverage and eligibility verification. </br>&#9679; Predictive modeling.</br>&#9679; Value-based analytics.|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=healthInsuranceCard.us&formType=healthInsuranceCard.us)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description| Development options | |-|--|-|
-|**prebuilt-contract**|Extract contract agreement and party details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=contract)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
+|[**prebuilt-contract**](concept-contract.md)|Extract contract agreement and party details.</br>&#9679; [Data and field extraction](concept-contract.md#field-extraction)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=contract&formType=contract)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+
+> [!div class="nextstepaction"]
+> [Return to model types](#prebuilt-models)
+
+### Credit card model
++
+| Model ID | Description| Development options |
+|-|--|-|
+|[**prebuilt-creditCard**](concept-credit-card.md)|Extract key information from credit cards. </br>&#9679; [Data and field extraction](concept-credit-card.md#field-extraction)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=contract&formType=contract)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
+### Marriage certificate model
++
+| Model ID | Description| Development options |
+|-|--|-|
+|[**prebuilt-marriageCertificate.us**](concept-marriage-certificate.md)|Extract key information from marriage certificates. </br>&#9679; [Data and field extraction](concept-marriage-certificate.md#field-extraction)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=marriageCertificate.us&formType=marriageCertificate.us)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+ ### US Tax W-2 model :::image type="content" source="media/overview/analyze-w2.png" alt-text="Screenshot of W-2 model analysis using Document Intelligence Studio."::: | Model ID| Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-tax.us.W-2**](concept-w2.md) |&#9679; Extract key information from IRS US W2 tax forms (year 2018-2021).</br>&#9679; [Data and field extraction](concept-w2.md#field-extraction)|&#9679; Automated tax document management.</br>&#9679; Mortgage loan application processing. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model) |
+|[**prebuilt-tax.us.W-2**](concept-tax-document.md) |&#9679; Extract key information from IRS US W2 tax forms (year 2018-2021).</br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-w-2)|&#9679; Automated tax document management.</br>&#9679; Mortgage loan application processing. |&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.w2&formType=tax.us.w2)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model) |
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
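As a sketch of how the W-2 fields described above might be consumed in code, here's a minimal Python example with the `azure-ai-formrecognizer` client library. The endpoint, key, and document URL are placeholders, and the field names (`Employee`, `WagesTipsAndOtherCompensation`) are examples from the linked field-extraction schema rather than a complete list.

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    AzureKeyCredential("<your-key>"),  # placeholder
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-tax.us.w2",
    "https://example.com/sample-w2.pdf",  # placeholder
)
# Take the first recognized document from the result (a sketch; no error handling).
w2 = poller.result().documents[0]

# Some W-2 fields are nested objects; their .value is a dictionary of sub-fields.
employee = w2.fields.get("Employee")
if employee and employee.value:
    for sub_name, sub_field in employee.value.items():
        print(f"Employee.{sub_name}: {sub_field.value}")

wages = w2.fields.get("WagesTipsAndOtherCompensation")
if wages:
    print("Wages:", wages.value, "confidence:", wages.confidence)
```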
You can use Document Intelligence to automate document processing in application
| Model ID | Description| Development options | |-|--|-|
-|**prebuilt-tax.us.1098**|Extract mortgage interest information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
+|[**prebuilt-tax.us.1098**](concept-tax-document.md)|Extract mortgage interest information and details. </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1098)|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Development options | |-|--|-|
-|**prebuilt-tax.us.1098E**|Extract student loan information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098E)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
+|[**prebuilt-tax.us.1098E**](concept-tax-document.md)|Extract student loan information and details. </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1098)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1098&formType=tax.us.1098E)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID |Description|Development options | |-|--|--|
-|**prebuilt-tax.us.1098T**|Extract tuition information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098T)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
+|[**prebuilt-tax.us.1098T**](concept-tax-document.md)|Extract tuition information and details. </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1098)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1098&formType=tax.us.1098T)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
-### US tax 1099 (and Variations) form
+### US tax 1099 (and variations) form
:::image type="content" source="media/overview/analyze-1099.png" alt-text="Screenshot of US 1099 tax form analyzed in the Document Intelligence Studio." lightbox="media/overview/analyze-1099.png"::: | Model ID |Description|Development options | |-|--|--|
-|**prebuilt-tax.us.1099(Variations)**|Extract information from 1099-form variations.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=intelligence)|
+|[**prebuilt-tax.us.1099{`variation`}**](concept-tax-document.md)|Extract information from 1099-form variations. </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1099-nec)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1099)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
+### US tax 1040 form
++
+| Model ID |Description|Development options |
+|-|--|--|
+|**prebuilt-tax.us.1040**|Extract information from 1040-form variations. </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1040-tax-form)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1040)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+ ::: moniker range="<=doc-intel-3.1.0" ### Business card
You can use Document Intelligence to automate document processing in application
#### Custom classification model | About | Description |Automation use cases | Development options | |-|--|-|--|
ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api.md
In this quickstart, you used a document Intelligence model to analyze various fo
::: moniker-end
+* [**Find more samples on GitHub**](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/main).
+
+* [**Find more samples on GitHub**](https://github.com/Azure-Samples/document-intelligence-code-samples/tree/v3.1(2023-07-31-GA)).
+ ::: moniker range="doc-intel-2.1.0" [!INCLUDE [applies to v2.1](../includes/applies-to-v21.md)] ::: moniker-end
ai-services Try Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 05/23/2024 monikerRange: '>=doc-intel-3.0.0'
monikerRange: '>=doc-intel-3.0.0'
## Prerequisites for new users
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
* A [**Document Intelligence**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource. > [!TIP]
-> Create an Azure AI services resource if you plan to access multiple Azure AI services under a single endpoint/key. For Document Intelligence access only, create a Document Intelligence resource. Currently [Microsoft Entra authentication](../../../active-directory/authentication/overview-authentication.md) is not supported on Document Intelligence Studio to access Document Intelligence service APIs. To use Document Intelligence Studio, enabling access key-based authentication/local authentication is necessary.
+> Create an Azure AI services resource if you plan to access multiple Azure AI services under a single endpoint/key. For Document Intelligence access only, create a Document Intelligence resource. You need a single-service resource if you intend to use [Microsoft Entra authentication](../../../active-directory/authentication/overview-authentication.md).
+>
+> Document Intelligence now supports Microsoft Entra (AAD) token authentication in addition to local (key-based) authentication when accessing Document Intelligence resources and storage accounts. Be sure to follow the instructions below to set up the correct access roles, especially if your resources have the `DisableLocalAuth` policy applied.
#### Azure role assignments For document analysis and prebuilt models, the following role assignments are required for different scenarios.+ * Basic * **Cognitive Services User**: you need this role for the Document Intelligence or Azure AI services resource to enter the analyze page. * Advanced * **Contributor**: you need this role to create a resource group, Document Intelligence service, or Azure AI services resource.
+For more information on authorization, *see* [Document Intelligence Studio authorization policies](../studio-overview.md#authorization-policies).
+
+> [!NOTE]
+> If local (key-based) authentication is disabled for your Document Intelligence service resource, be sure to obtain the **Cognitive Services User** role; your Microsoft Entra (AAD) token is then used to authenticate requests in Document Intelligence Studio. The **Contributor** role only allows you to list keys, but it doesn't grant permission to use the resource when key-based access is disabled.
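For the SDK equivalent of the token-based access described in this note, here's a minimal Python sketch. It assumes `azure-identity` and `azure-ai-formrecognizer` are installed, the signed-in identity holds the **Cognitive Services User** role, and the endpoint and document URL are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# DefaultAzureCredential picks up your signed-in identity (Azure CLI, VS Code,
# managed identity, and so on) and requests a Microsoft Entra token for you.
credential = DefaultAzureCredential()

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    credential,
)

# With DisableLocalAuth in effect, this call is authorized by the Entra token
# rather than an account key.
poller = client.begin_analyze_document_from_url(
    "prebuilt-read",
    "https://example.com/sample.pdf",  # placeholder document
)
print(poller.result().content[:200])
```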
+ ## Models
-Prebuilt models help you add Document Intelligence features to your apps without having to build, train, and publish your own models. You can choose from several prebuilt models, each of which has its own set of supported data fields. The choice of model to use for the analyze operation depends on the type of document to be analyzed. Document Intelligence currently supports the following prebuilt models:
+Prebuilt models help you add Document Intelligence features to your apps without having to build, train, and publish your own models. You can choose from several prebuilt models, each of which has its own set of supported data fields. The choice of model to use for the `analyze` operation depends on the type of document to be analyzed. Document Intelligence currently supports the following prebuilt models:
#### Document analysis
Prebuilt models help you add Document Intelligence features to your apps without
* [**Invoice**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice): extract text, selection marks, tables, key-value pairs, and key information from invoices. * [**Receipt**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt): extract text and key information from receipts.
-* [**Health insurance card**](https://formrecognizer.appliedai.azure.com/studio): extract insurer, member, prescription, group number and other key information from US health insurance cards.
+* [**Health insurance card**](https://formrecognizer.appliedai.azure.com/studio): extract insurer, member, prescription, group number, and other key information from US health insurance cards.
* [**W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extract text and key information from W-2 tax forms. * [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports.
Prebuilt models help you add Document Intelligence features to your apps without
* [**Custom extraction models**](https://formrecognizer.appliedai.azure.com/studio): extract information from forms and documents with custom extraction models. Quickly train a model by labeling as few as five sample documents. * [**Custom classification model**](https://formrecognizer.appliedai.azure.com/studio): train a custom classifier to distinguish between the different document types within your applications. Quickly train a model with as few as two classes and five samples per class.
-After you've completed the prerequisites, navigate to [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/).
-
-1. Select a Document Intelligence service feature from the Studio home page.
+After you complete the prerequisites, navigate to [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/).
-1. This step is a one-time process unless you've already selected the service resource from prior use. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
+1. Select a Document Intelligence service feature from the Studio home page. This step is a one-time process unless you already selected the service resource in a prior session. Select your Azure subscription, resource group, and resource. (You can change the resources anytime in "Settings" in the top menu.) Review and confirm your selections.
1. Select the Analyze button to run analysis on the sample document or try your document by using the Add command.
After you've completed the prerequisites, navigate to [Document Intelligence Stu
1. Observe the highlighted extracted content in the document view. Hover your mouse over the keys and values to see details.
-1. In the output section's Result tab, browse the JSON output to understand the service response format.
+1. Select the output section's Result tab and browse the JSON output to understand the service response format.
-1. In the Code tab, browse the sample code for integration. Copy and download to get started.
+1. Select the Code tab and browse the sample code for integration. Copy and download to get started.
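To see roughly the same JSON-style structure outside the Studio's Result tab, you can serialize an analyze result with the Python client library. This is a minimal sketch, assuming placeholder endpoint, key, and document URL, and that the installed `azure-ai-formrecognizer` version exposes `to_dict()` on result models.

```python
import json

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    AzureKeyCredential("<your-key>"),  # placeholder
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-receipt",
    "https://example.com/sample-receipt.png",  # placeholder
)
result = poller.result()

# to_dict() produces a plain dictionary that mirrors the service response
# format shown in the Studio's Result tab.
print(json.dumps(result.to_dict(), indent=2, default=str)[:1000])
```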
## Added prerequisites for custom projects
For custom projects, the following role assignments are required for different s
* **Cognitive Services User**: You need this role for Document Intelligence or Azure AI services resource to train the custom model or do analysis with trained models. * **Storage Blob Data Contributor**: You need this role for the Storage Account to create a project and label data. * Advanced
- * **Storage Account Contributor**: You need this role for the Storage Account to set up CORS settings (this is a one-time effort if the same storage account is reused).
+ * **Storage Account Contributor**: You need this role for the Storage Account to set up CORS settings (this action is a one-time effort if the same storage account is reused).
* **Contributor**: You need this role to create a resource group and resources. ### Configure CORS
CORS should now be configured to use the storage account from Document Intellige
1. The **Upload blob** window appears.
-1. Select your file(s) to upload.
+1. Select your files to upload.
:::image border="true" type="content" source="../media/sas-tokens/upload-blob-window.png" alt-text="Screenshot of upload blob window in the Azure portal.":::
To create custom models, you start with configuring your project:
1. Review and submit your settings to create the project.
-1. To quickstart the labeling process, use the auto label feature to label using already trained model or one of our prebuilt models.
+1. Use the auto label feature to label using an already trained model or one of our prebuilt models.
1. For manual labeling from scratch, define the labels and their types that you're interested in extracting.
To label for signature detection: (Custom form only)
## Next steps * Follow our [**Document Intelligence v3.1 migration guide**](../v3-1-migration-guide.md) to learn the differences from the previous version of the REST API.
-* Explore our [**v3.0 SDK quickstarts**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new SDKs.
+* Explore our [**v3.0 SDK quickstarts**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new client libraries.
* Refer to our [**v3.0 REST API quickstarts**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API. [Get started with the Document Intelligence Studio](https://formrecognizer.appliedai.azure.com).
ai-services Sdk Overview V2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v2-1.md
- devx-track-python - ignite-2023 Previously updated : 11/29/2023 Last updated : 05/23/2024 monikerRange: 'doc-intel-2.1.0'
Document Intelligence SDK supports the following languages and platforms:
| Language → Document Intelligence SDK version | Package| Supported API version| Platform support | |:-:|:-|:-| :-|
-| [.NET/C# → 3.1.x (GA)](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[Java → 3.1.x (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/3.1.1/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/3.1.1) |[v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
-|[JavaScript → 3.1.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/3.1.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/3.1.0)|[v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
-|[Python → 3.1.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.1.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.1.0/)|[v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)|
+| [.NET/C# → 3.1.x (GA)](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v2.1](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)|[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java → 3.1.x (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/3.1.1/https://docsupdatetracker.net/index.html) |[Maven repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/3.1.1) |[v2.1](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)|[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript → 3.1.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/3.1.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/3.1.0)|[v2.1](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)| [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python → 3.1.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.1.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.1.0/)|[v2.1](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)|[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)|
+
+For more information on other SDK versions, see:
+
+* [`2024-02-29` (preview)](sdk-overview-v4-0.md)
+* [`2023-07-31` v3.1 (GA)](sdk-overview-v3-1.md)
+* [`2022-08-31` v3.0 (GA)](sdk-overview-v3-0.md)
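For a quick sense of the v2.1 packages listed above, here's a minimal Python sketch using `azure-ai-formrecognizer` 3.1.0 with placeholder endpoint, key, and receipt URL.

```python
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    AzureKeyCredential("<your-key>"),  # placeholder
)

# The v2.1 API uses model-specific "recognize" operations rather than a
# generic analyze call.
poller = client.begin_recognize_receipts_from_url(
    "https://example.com/sample-receipt.png"  # placeholder
)

for receipt in poller.result():
    for name, field in receipt.fields.items():
        print(f"{name}: {field.value} (confidence {field.confidence})")
```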
## Supported Clients
const { FormRecognizerClient, AzureKeyCredential } = require("@azure/ai-form-rec
### 3. Set up authentication
-There are two supported methods for authentication
+There are two supported methods for authentication:
* Use a [Document Intelligence API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.ide
var client = new FormRecognizerClient(new Uri(endpoint), new DefaultAzureCredential()); ```
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client).
### [Java](#tab/java)
Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.i
.buildClient(); ```
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
### [JavaScript](#tab/javascript)
Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-ide
) ```
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
- ### 4. Build your application Create a client object to interact with the Document Intelligence SDK, and then call methods on that client object to interact with the service. The SDKs provide both synchronous and asynchronous methods. For more insight, try a [quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) in a language of your choice. ## Help options
-The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+The [Microsoft Q & A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
## Next steps >[!div class="nextstepaction"]
-> [**Explore Document Intelligence REST API v2.1**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)
+> [**Explore Document Intelligence REST API v2.1**](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)
> [!div class="nextstepaction"] > [**Try a Document Intelligence quickstart**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)
ai-services Sdk Overview V3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-0.md
- devx-track-python - ignite-2023 Previously updated : 11/21/2023 Last updated : 05/23/2024 monikerRange: 'doc-intel-3.0.0'
Document Intelligence SDK supports the following languages and platforms:
| Language → Document Intelligence SDK version | Package| Supported API version| Platform support | |:-:|:-|:-| :-|
-| [.NET/C# → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[Java → 4.0.6 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.6) |[v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
-|[JavaScript → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
-|[Python → 3.2.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+| [.NET/C# → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v3.0](/rest/api/aiservices/operation-groups?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)|[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java → 4.0.6 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/https://docsupdatetracker.net/index.html) |[Maven repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.6) |[v3.0](/rest/api/aiservices/operation-groups?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)|[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [v3.0](/rest/api/aiservices/operation-groups?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)| [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python → 3.2.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [v3.0](/rest/api/aiservices/operation-groups?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)|[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)|
+
+For more information on other SDK versions, see:
+
+* [`2024-02-29` v4.0 (preview)](sdk-overview-v4-0.md)
+* [`2023-07-31` v3.1 (GA)](sdk-overview-v3-1.md)
+
+* [`v2.1` (GA)](sdk-overview-v2-1.md)
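As an illustration of the v3.0 client libraries in the table above, here's a minimal Python sketch (azure-ai-formrecognizer 3.2.0, placeholder endpoint, key, and document) that runs the prebuilt layout model and walks the extracted tables.

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    AzureKeyCredential("<your-key>"),  # placeholder
)

poller = client.begin_analyze_document_from_url(
    "prebuilt-layout",
    "https://example.com/sample.pdf",  # placeholder
)
result = poller.result()

# Layout analysis returns pages (lines, words, selection marks) and tables.
for table_index, table in enumerate(result.tables):
    print(f"Table {table_index}: {table.row_count} rows x {table.column_count} columns")
    for cell in table.cells:
        print(f"  [{cell.row_index}, {cell.column_index}] {cell.content}")
```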
## Supported Clients
from azure.core.credentials import AzureKeyCredential
### 3. Set up authentication
-There are two supported methods for authentication
+There are two supported methods for authentication:
* Use a [Document Intelligence API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.ide
var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential()); ```
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client).
### [Java](#tab/java)
Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.i
.buildClient(); ```
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
### [JavaScript](#tab/javascript)
Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-ide
) ```
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
Create a client object to interact with the Document Intelligence SDK, and then
## Help options
-The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+The [Microsoft Q & A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
## Next steps >[!div class="nextstepaction"]
-> [**Explore Document Intelligence REST API v3.0**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)
+> [**Explore Document Intelligence REST API v3.0**](/rest/api/aiservices/operation-groups?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)
> [!div class="nextstepaction"] > [**Try a Document Intelligence quickstart**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
ai-services Sdk Overview V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-1.md
Title: Document Intelligence (formerly Form Recognizer) SDK target REST API 2023-07-31 (GA) latest.
-description: The Document Intelligence 2023-07-31 (GA) software development kits (SDKs) expose Document Intelligence models, features and capabilities that are in active development for C#, Java, JavaScript, or Python programming language.
+description: The Document Intelligence 2023-07-31 (GA) software development kits (SDKs) expose Document Intelligence models, features, and capabilities that are in active development for C#, Java, JavaScript, or Python programming language.
- devx-track-python - ignite-2023 Previously updated : 11/21/2023 Last updated : 05/06/2024 monikerRange: 'doc-intel-3.1.0'
monikerRange: 'doc-intel-3.1.0'
<!-- markdownlint-disable MD001 --> <!-- markdownlint-disable MD051 -->
-# SDK target: REST API 2023-07-31 (GA) latest
+# SDK target: REST API 2023-07-31 (GA)
![Document Intelligence checkmark](media/yes-icon.png) **REST API version 2023-07-31 (GA)**
Document Intelligence SDK supports the following languages and platforms:
| Language → Document Intelligence SDK version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Package| Supported API version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Platform support | |:-:|:-|:-| :-:|
-| [**.NET/C# → latest (GA)**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)|[&bullet; 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; 2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[**Java → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/https://docsupdatetracker.net/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0) |[&bullet; 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; 2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
-|[**JavaScript → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/5.0.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [&bullet; 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
-|[**Python → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)| [&bullet; 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [&bullet; v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[&bullet; v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+| [**.NET/C# → latest (GA)**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)|[2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[**Java → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/https://docsupdatetracker.net/index.html) |[Maven repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0) |[2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[Windows, macOS, Linux](/java/openjdk/install)|
+|[**JavaScript → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/5.0.0/https://docsupdatetracker.net/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[**Python → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0/https://docsupdatetracker.net/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)| [2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)|
+
+For more information on other SDK versions, see:
+
+* [`2024-02-29` v4.0 (preview)](sdk-overview-v4-0.md)
+* [`2022-08-31` v3.0 (GA)](sdk-overview-v3-0.md)
+* [`v2.1` (GA)](sdk-overview-v2-1.md)
## Supported Clients
from azure.core.credentials import AzureKeyCredential
### 3. Set up authentication
-There are two supported methods for authentication
+There are two supported methods for authentication:
* Use a [Document Intelligence API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
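As a minimal Python sketch of the key-based option (the endpoint and key values are placeholders you copy from your resource in the Azure portal):

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholder values; use the real endpoint and key from your resource's Keys and Endpoint page.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
key = "<your-api-key>"

# The client sends the key with every request via the Ocp-Apim-Subscription-Key header.
client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
```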
Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.ide
var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential()); ```
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client)
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client).
### [Java](#tab/java)
Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.i
.buildClient(); ```
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
### [JavaScript](#tab/javascript)
Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-ide
) ```
-For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client)
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
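A comparable Python sketch, assuming the `azure-identity` package is installed and the signed-in identity has been granted access to the Document Intelligence resource:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# DefaultAzureCredential tries environment, managed identity, and developer
# tool credentials in turn until one succeeds.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"
client = DocumentAnalysisClient(endpoint=endpoint, credential=DefaultAzureCredential())
```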
Create a client object to interact with the Document Intelligence SDK, and then
## Help options
-The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+The [Microsoft Q & A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure AI Document Intelligence and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
## Next steps
ai-services Sdk Overview V4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v4-0.md
- devx-track-python - ignite-2023 Previously updated : 03/20/2024 Last updated : 05/06/2024 monikerRange: 'doc-intel-4.0.0'
Document Intelligence SDK supports the following languages and platforms:
| Language → Document Intelligence SDK version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Package| Supported API version &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;| Platform support |
|:-:|:-|:-|:-:|
-| [**.NET/C# → 1.0.0-beta.2 (preview)**](/dotnet/api/overview/azure/ai.documentintelligence-readme?view=azure-dotnet-preview&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.DocumentIntelligence/1.0.0-beta.2)|&bullet; [2024-02-29 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2024-02-29-preview&preserve-view=true&tabs=HTTP)</br>&bullet; [2023-10-31 &(preview)](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)</br>&bullet; [2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>&bullet; [v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
- |[**Java → 1.0.0-beta.2 (preview)**](/java/api/overview/azure/ai-documentintelligence-readme?view=azure-java-preview&preserve-view=true) |[Maven repository](https://mvnrepository.com/artifact/com.azure/azure-ai-documentintelligence/1.0.0-beta.2) |&bullet; [2024-02-29 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2024-02-29-preview&preserve-view=true&tabs=HTTP)</br>&bullet; [2023-10-31 &(preview)](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)</br>&bullet; [2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>&bullet; [v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
-|[**JavaScript → 1.0.0-beta.2 (preview)**](/javascript/api/overview/azure/ai-document-intelligence-rest-readme?view=azure-node-preview&preserve-view=true)| [npm](https://www.npmjs.com/package/@azure-rest/ai-document-intelligence)|&bullet; [2024-02-29 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2024-02-29-preview&preserve-view=true&tabs=HTTP)</br>&bullet; [2023-10-31 &(preview)](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)</br>&bullet; [2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>&bullet; [v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
-|[**Python → 1.0.0b2 (preview)**](/python/api/overview/azure/ai-documentintelligence-readme?view=azure-python-preview&preserve-view=true) | [PyPI](https://pypi.org/project/azure-ai-documentintelligence/)|&bullet; [2024-02-29 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2024-02-29-preview&preserve-view=true&tabs=HTTP)</br>&bullet; [2023-10-31 &(preview)](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)</br>&bullet; [2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> &bullet; [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>&bullet; [v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+| [**.NET/C# → 1.0.0-beta.2 (preview)**](/dotnet/api/overview/azure/ai.documentintelligence-readme?view=azure-dotnet-preview&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.DocumentIntelligence/1.0.0-beta.2)|[2024-02-29 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2024-02-29-preview&preserve-view=true&tabs=HTTP)|[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+ |[**Java → 1.0.0-beta.2 (preview)**](/java/api/overview/azure/ai-documentintelligence-readme?view=azure-java-preview&preserve-view=true) |[Maven repository](https://mvnrepository.com/artifact/com.azure/azure-ai-documentintelligence/1.0.0-beta.2) |[2024-02-29 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2024-02-29-preview&preserve-view=true&tabs=HTTP)|[Windows, macOS, Linux](/java/openjdk/install)|
+|[**JavaScript → 1.0.0-beta.2 (preview)**](/javascript/api/overview/azure/ai-document-intelligence-rest-readme?view=azure-node-preview&preserve-view=true)| [npm](https://www.npmjs.com/package/@azure-rest/ai-document-intelligence)|[2024-02-29 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2024-02-29-preview&preserve-view=true&tabs=HTTP)| [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[**Python → 1.0.0b2 (preview)**](/python/api/overview/azure/ai-documentintelligence-readme?view=azure-python-preview&preserve-view=true) | [PyPI](https://pypi.org/project/azure-ai-documentintelligence/)|[2024-02-29 (preview)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2024-02-29-preview&preserve-view=true&tabs=HTTP)|[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)|
+
+For more information on other SDK versions, see:
+
+* [`2023-07-31` v3.1 (GA)](sdk-overview-v3-1.md)
+* [`2022-08-31` v3.0 (GA)](sdk-overview-v3-0.md)
+* [`v2.1` (GA)](sdk-overview-v2-1.md)
## Supported Clients
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/service-limits.md
This article contains both a quick reference and detailed description of Azure A
## Model usage
-|Document types supported|Read|Layout|Prebuilt models|Custom models|
-|--|--|--|--|--|
-| PDF | ✔️ | ✔️ | ✔️ | ✔️ |
-| Images (JPEG/JPG), PNG, BMP, TIFF, HEIF | ✔️ | ✔️ | ✔️ | ✔️ |
-| Office file types DOCX, PPTX, XLS | ✔️ | ✖️ | ✖️ | ✖️ |
+|Document types supported|Read|Layout|Prebuilt models|Custom models|Add-on capabilities|
+|--|--|--|--|--|-|
+| PDF | ✔️ | ✔️ | ✔️ | ✔️ |✔️|
+| Images: `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF` | ✔️ | ✔️ | ✔️ | ✔️ |✔️|
+| Microsoft Office: `DOCX`, `PPTX`, `XLS` | ✔️ | ✔️ | ✖️ | ✖️ |✖️|
+
+✔️ = Supported
+✖️ = Not supported
:::moniker-end

|Document types supported|Read|Layout|Prebuilt models|Custom models|
|--|--|--|--|--|
| PDF | ✔️ | ✔️ | ✔️ | ✔️ |
-| Images (JPEG/JPG), PNG, BMP, TIFF, HEIF | ✔️ | ✔️ | ✔️ | ✔️ |
-| Office file types DOCX, PPTX, XLS | ✔️ | ✔️ | ✖️ | ✖️ |
+| Images: `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF` | ✔️ | ✔️ | ✔️ | ✔️ |
+| Microsoft Office: `DOCX`, `PPTX`, `XLS` | ✔️ | ✖️ | ✖️ | ✖️ |
+
+✔️ = Supported
+✖️ = Not supported
:::moniker-end

::: moniker range=">=doc-intel-3.0.0"
This article contains both a quick reference and detailed description of Azure A
## Detailed description, Quota adjustment, and best practices
-Before requesting a quota increase (where applicable), ensure that it's necessary. Document Intelligence service uses autoscaling to bring the required computational resources in "on-demand" and at the same time to keep the customer costs low, deprovision unused resources by not maintaining an excessive amount of hardware capacity.
+Before requesting a quota increase (where applicable), ensure that it's necessary. The Document Intelligence service uses autoscaling to provision the required computational resources on demand, keep customer costs low, and deprovision unused resources rather than maintaining excess hardware capacity.
If your application returns response code 429 (*Too many requests*) and your workload is within the defined limits, the service is most likely still scaling up to meet your demand and doesn't yet have enough resources to serve the request. This state is transient and shouldn't last long.
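One way to absorb this transient state is to retry with exponential backoff. The sketch below is illustrative only; `submit_request` is a hypothetical stand-in for whatever Document Intelligence call your application makes.

```python
import time

from azure.core.exceptions import HttpResponseError


def call_with_backoff(submit_request, max_retries=5):
    """Retry a Document Intelligence call when the service returns 429.

    submit_request is a hypothetical zero-argument callable that performs
    the actual request and returns its result.
    """
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return submit_request()
        except HttpResponseError as error:
            if error.status_code != 429 or attempt == max_retries - 1:
                raise
            # Throttling is transient; wait, then try again with a longer delay.
            time.sleep(delay)
            delay *= 2
```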
ai-services Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/studio-overview.md
- ignite-2023 Previously updated : 01/19/2024 Last updated : 05/10/2024 monikerRange: '>=doc-intel-3.0.0'
monikerRange: '>=doc-intel-3.0.0'
[!INCLUDE [applies to v4.0 v3.1 v3.0](includes/applies-to-v40-v31-v30.md)]

[Document Intelligence Studio](https://documentintelligence.ai.azure.com/studio/) is an online tool to visually explore, understand, train, and integrate features from the Document Intelligence service into your applications. The studio provides a platform for you to experiment with the different Document Intelligence models and sample returned data in an interactive manner without the need to write code. Use the Document Intelligence Studio to:

* Learn more about the different capabilities in Document Intelligence.
* Use your Document Intelligence resource to test models on sample documents or upload your own documents.
* Experiment with different add-on and preview features to adapt the output to your needs.
* Train custom classification models to classify documents.
* Train custom extraction models to extract fields from documents.
-* Get sample code for the language specific SDKs to integrate into your applications.
+* Get sample code for the language-specific SDKs to integrate into your applications.
The studio supports Document Intelligence v3.0 and later API versions for model analysis and custom model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-1-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-## Get started using Document Intelligence Studio
+## Get started
1. To use Document Intelligence Studio, you need the following assets:
The studio supports Document Intelligence v3.0 and later API versions for model
* **Azure AI services or Document Intelligence resource**. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal to get your key and endpoint. Use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-1. Navigate to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/). If it's your first time logging in, a popup window appears prompting you to configure your service resource. You have two options:
+## Authorization policies
- **a. Access by Resource (recommended)**.
+Your organization can opt to disable local authentication and enforce Microsoft Entra (formerly Azure Active Directory) authentication for Azure AI Document Intelligence resources and Azure blob storage.
- * Choose your existing subscription.
- * Select an existing resource group within your subscription or create a new one.
- * Select your existing Document Intelligence or Azure AI services resource.
+* Using Microsoft Entra authentication requires that key-based authorization is disabled. After key access is disabled, Microsoft Entra ID is the only available authorization method.
- **b. Access by API endpoint and key**.
+* Microsoft Entra ID lets you grant least-privilege, granular access to Azure resources.
- * Retrieve your endpoint and key from the Azure portal.
- * Go to the overview page for your resource and select **Keys and Endpoint** from the left navigation bar.
- * Enter the values in the appropriate fields.
+* For more information, *see* the following guidance:
- :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot of keys and endpoint location in the Azure portal.":::
+ * [Disable local authentication for Azure AI Services](../disable-local-auth.md).
+ * [Prevent Shared Key authorization for an Azure Storage account](../../storage/common/shared-key-authorization-prevent.md).
-1. Once the resource is configured, you're able to try the different models offered by Document Intelligence Studio. From the front page, select any Document Intelligence model to try using with a no-code approach.
+* **Designating role assignments**. Document Intelligence Studio basic access requires the [`Cognitive Services User`](../../role-based-access-control/built-in-roles/ai-machine-learning.md#cognitive-services-user) role. For more information, *see* [Document Intelligence role assignments](quickstarts/try-document-intelligence-studio.md#azure-role-assignments) and [Document Intelligence Studio Permission](faq.yml#what-permissions-do-i-need-to-access-document-intelligence-studio-).
- :::image type="content" source="media/studio/welcome-to-studio.png" alt-text="Screenshot of Document Intelligence Studio front page.":::
+## Authentication
-1. To test any of the document analysis or prebuilt models, select the model and use one o the sample documents or upload your own document to analyze. The analysis result is displayed at the right in the content-result-code window.
+Navigate to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/). If it's your first time signing in, a pop-up window prompts you to configure your service resource. Depending on your organization's policy, you have one or two options:
-1. Custom models need to be trained on your documents. See [custom models overview](concept-custom.md) for an overview of custom models.
+* **Microsoft Entra authentication: access by Resource (recommended)**.
+
+ * Choose your existing subscription.
+ * Select an existing resource group within your subscription or create a new one.
+ * Select your existing Document Intelligence or Azure AI services resource.
+
+ :::image type="content" source="media/studio/configure-service-resource.png" alt-text="Screenshot of configure service resource form from the Document Intelligence Studio.":::
+
+* **Local authentication: access by API endpoint and key**.
+
+ * Retrieve your endpoint and key from the Azure portal.
+ * Go to the overview page for your resource and select **Keys and Endpoint** from the left navigation bar.
+ * Enter the values in the appropriate fields.
-1. After validating the scenario in the Document Intelligence Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to get started incorporating Document Intelligence models into your own applications.
+ :::image type="content" source="media/studio/keys-and-endpoint.png" alt-text="Screenshot of the keys and endpoint page in the Azure portal.":::
+
+## Try a Document Intelligence model
+
+1. Once your resource is configured, you can try the different models offered by Document Intelligence Studio. From the front page, select any Document Intelligence model to try it with a no-code approach.
+
+1. To test any of the document analysis or prebuilt models, select the model and use one of the sample documents or upload your own document to analyze. The analysis result is displayed on the right in the content-result-code window.
+
+1. Custom models need to be trained on your documents. See [custom models overview](concept-custom.md) for an overview of custom models.
-To learn more about each model, *see* concept pages.
+1. After validating the scenario in the Document Intelligence Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to get started incorporating Document Intelligence models into your own applications.
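As a hedged Python sketch of that last step (the model ID and document URL below are illustrative placeholders, not the studio's output):

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource-name>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-api-key>"),
)

# Analyze a publicly reachable sample document with the prebuilt invoice model.
poller = client.begin_analyze_document_from_url(
    "prebuilt-invoice", "https://example.com/sample-invoice.pdf"
)
result = poller.result()

# Print each extracted field with its value and confidence score.
for document in result.documents:
    for name, field in document.fields.items():
        print(name, field.value, field.confidence)
```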
+To learn more about each model, *see* our concept pages.
-### Manage your resource
+### View resource details
To view resource details such as name and pricing tier, select the **Settings** icon in the top-right corner of the Document Intelligence Studio home page and select the **Resource** tab. If you have access to other resources, you can switch resources as well.
With Document Intelligence, you can quickly automate your data processing in app
## Next steps
-* Visit [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) to begin using the models presented by the service.
+* To begin using the models presented by the service, visit [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
* For more information on Document Intelligence capabilities, see [Azure AI Document Intelligence overview](overview.md).
ai-services Tutorial Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/tutorial-azure-function.md
Previously updated : 07/18/2023 Last updated : 05/23/2024
Next, you'll add your own code to the Python script to call the Document Intelli
f"Blob Size: {myblob.length} bytes") ```
-1. The following code block calls the Document Intelligence [Analyze Layout](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeLayoutAsync) API on the uploaded document. Fill in your endpoint and key values.
+1. The following code block calls the Document Intelligence [Analyze Layout](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true) API on the uploaded document. Fill in your endpoint and key values.
```Python # This is the call to the Document Intelligence endpoint
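# A hedged sketch of that call (not the tutorial's exact code): POST the blob
# bytes to the v2.1 Analyze Layout endpoint with the key in the
# Ocp-Apim-Subscription-Key header. The endpoint, key, and content type are
# placeholder assumptions; myblob is the blob trigger's input stream.
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
key = "<your-api-key>"
url = f"{endpoint}/formrecognizer/v2.1/layout/analyze"
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Content-Type": "application/pdf",
}
response = requests.post(url, headers=headers, data=myblob.read())
# The v2.1 analyze operation is asynchronous: poll the Operation-Location URL
# returned in the response headers to retrieve the results.
operation_url = response.headers["Operation-Location"]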
ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/tutorial-logic-apps.md
- ignite-2023 Previously updated : 08/01/2023- Last updated : 04/24/2024+ zone_pivot_groups: cloud-location monikerRange: '<=doc-intel-4.0.0'
monikerRange: '<=doc-intel-4.0.0'
:::moniker-end
-Azure Logic Apps is a cloud-based platform that can be used to automate workflows without writing a single line of code. The platform enables you to easily integrate Microsoft and third-party applications with your apps, data, services, and systems. A Logic App is the Azure resource you create when you want to develop a workflow. Here are a few examples of what you can do with a Logic App:
+Azure Logic Apps is a cloud-based platform that can be used to automate workflows without writing a single line of code. The platform enables you to easily integrate Microsoft applications with your apps, data, services, and systems. A Logic App is the Azure resource you create when you want to develop a workflow. Here are a few examples of what you can do with a Logic App:
* Create business processes and workflows visually. * Integrate workflows with software as a service (SaaS) and enterprise applications.
Choose a workflow using a file from either your Microsoft OneDrive account or Mi
## Test the automation flow
-Let's quickly review what we've done before we test our flow:
+Let's quickly review what we completed before we test our flow:
> [!div class="checklist"] >
Let's quickly review what we've done before we test our flow:
> * We added a Document Intelligence action to our flow. In this scenario, we decided to use the invoice API to automatically analyze an invoice from the OneDrive folder. > * We added an Outlook.com action to our flow. We sent some of the analyzed invoice data to a pre-determined email address.
-Now that we've created the flow, the last thing to do is to test it and make sure that we're getting the expected behavior.
+Now that we created the flow, the last thing to do is to test it and make sure that we're getting the expected behavior.
1. To test the Logic App, first open a new tab and navigate to the OneDrive folder you set up at the beginning of this tutorial. Add this file to the OneDrive folder [Sample invoice.](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/invoice-logic-apps-tutorial.pdf)
Now that we've created the flow, the last thing to do is to test it and make sur
:::image type="content" source="media/logic-apps-tutorial/disable-delete.png" alt-text="Screenshot of disable and delete buttons.":::
-Congratulations! You've officially completed this tutorial.
+Congratulations! You completed this tutorial.
## Next steps
ai-services V3 1 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-1-migration-guide.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 05/23/2024 monikerRange: '<=doc-intel-3.1.0'
POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copyTo?a
## Changes to list models
-List models are extended to now return prebuilt and custom models. All prebuilt model names start with ```prebuilt-```. Only models with a status of succeeded are returned. To list models that either failed or are in progress, see [List Operations](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModels).
+List models are extended to now return prebuilt and custom models. All prebuilt model names start with ```prebuilt-```. Only models with a status of succeeded are returned. To list models that either failed or are in progress, see [List Operations](/rest/api/aiservices/miscellaneous/list-operations?view=rest-aiservices-v3.0%20(2022-08-31)&tabs=HTTP&preserve-view=true).
***Sample list models request***
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md
Previously updated : 02/29/2024 Last updated : 05/23/2024 - references_regions
Document Intelligence service is updated on an ongoing basis. Bookmark this page
> [!IMPORTANT] > Preview API versions are retired once the GA API is released. The 2023-02-28-preview API version is being retired. If you're still using the preview API or the associated SDK versions, update your code to target the latest API version, 2023-07-31 (GA).
+## May 2024
+
+The Document Intelligence Studio has added support for Microsoft Entra (formerly Azure Active Directory) authentication. For more information, *see* [Document Intelligence Studio overview](studio-overview.md#authentication).
+ ## February 2024
-The Document Intelligence [**2024-02-29-preview**](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=intelligence) REST API is now available. This preview API introduces new and updated capabilities:
+The Document Intelligence [**2024-02-29-preview**](/rest/api/aiservices/document-models?view=rest-aiservices-v4.0%20(2024-02-29-preview)&preserve-view=true) REST API is now available. This preview API introduces new and updated capabilities:
* Public preview version [**2024-02-29-preview**](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true) is currently available only in the following Azure regions:
The [Document Intelligence client libraries](sdk-overview-v4-0.md) targeting RES
## November 2023
-The Document Intelligence [**2023-10-31-preview**](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=intelligence) REST API is now available. This preview API introduces new and updated capabilities:
+The Document Intelligence [**2023-10-31-preview**](/rest/api/aiservices/document-models?view=rest-aiservices-v4.0%20(2024-02-29-preview)&preserve-view=true) REST API is now available. This preview API introduces new and updated capabilities:
* Public preview version [**2023-10-31-preview**](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true) is currently only available in the following Azure regions:
The v3.1 API introduces new and updated capabilities:
**Announcing the latest Document Intelligence client-library public preview release**
-* Document Intelligence REST API Version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) supports the public preview release client libraries. This release includes the following new features and capabilities available for .NET/C# (4.1.0-beta-1), Java (4.1.0-beta-1), JavaScript (4.1.0-beta-1), and Python (3.3.0b.1) client libraries:
+* Document Intelligence REST API Version **2023-02-28-preview** supports the public preview release client libraries. This release includes the following new features and capabilities available for .NET/C# (4.1.0-beta-1), Java (4.1.0-beta-1), JavaScript (4.1.0-beta-1), and Python (3.3.0b.1) client libraries:
* [**Custom classification model**](concept-custom-classifier.md)
The v3.1 API introduces new and updated capabilities:
## March 2023 > [!IMPORTANT]
-> [**`2023-02-28-preview`**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) capabilities are currently only available in the following regions:
+> **`2023-02-28-preview`** capabilities are currently only available in the following regions:
> > * West Europe > * West US2 > * East US
-* [**Custom classification model**](concept-custom-classifier.md) is a new capability within Document Intelligence starting with the ```2023-02-28-preview``` API. Try the document classification capability using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects) or the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/GetClassifyDocumentResult).
+* [**Custom classification model**](concept-custom-classifier.md) is a new capability within Document Intelligence starting with the ```2023-02-28-preview``` API.
* [**Query fields**](concept-query-fields.md) capabilities added to the General Document model, use Azure OpenAI models to extract specific fields from documents. Try the **General documents with query fields** feature using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio). Query fields are currently only active for resources in the `East US` region. * [**Add-on capabilities**](concept-add-on-capabilities.md): * [**Font extraction**](concept-add-on-capabilities.md#font-property-extraction) is now recognized with the ```2023-02-28-preview``` API.
The v3.1 API introduces new and updated capabilities:
* [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models. * [**Language Expansion**](language-support.md) Document Intelligence Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
-* Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
+* Get started with the new v3.0 preview API.
* Document Intelligence model data extraction:
The v3.1 API introduces new and updated capabilities:
* **Document Intelligence `v2.1-preview.1`** includes the following features:
- * **REST API reference is available** - View the [`v2.1-preview.1 reference`](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync).
+ * **REST API reference is available** - View the [`v2.1-preview.1 reference`](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true).
* **New languages supported** In addition to English, the following [languages](language-support.md) are now supported for `Layout` and `Train Custom Model`: English (`en`), Chinese (Simplified) (`zh-Hans`), Dutch (`nl`), French (`fr`), German (`de`), Italian (`it`), Portuguese (`pt`) and Spanish (`es`).
* **Checkbox / Selection Mark detection** – Document Intelligence supports detection and extraction of selection marks such as check boxes and radio buttons. Selection Marks are extracted in `Layout` and you can now also label and train in `Train Custom Model` - _Train with Labels_ to extract key-value pairs for selection marks.
* **Model Compose** - allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/overview.md
With Immersive Reader, you can break words into syllables to improve readability
Immersive Reader is a standalone web application. When it's invoked, the Immersive Reader client library displays on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
+## Data privacy for Immersive Reader
+
+Immersive Reader doesn't store any customer data.
+ ## Next step The Immersive Reader client library is available in C#, JavaScript, Java (Android), Kotlin (Android), and Swift (iOS). Get started with:
ai-services Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/developer-guide.md
It additionally enables you to use the following features, without creating any
* [Conversation summarization](../summarization/quickstart.md?pivots=rest-api&tabs=conversation-summarization) * [Personally Identifiable Information (PII) detection for conversations](../personally-identifiable-information/how-to-call-for-conversations.md?tabs=rest-api#examples)
-As you use this API in your application, see the [reference documentation](/rest/api/language/2023-04-01/conversation-analysis-runtime) for additional information.
+As you use this API in your application, see the [reference documentation](/rest/api/language) for additional information.
### Text analysis authoring API
ai-services Migrate Language Service Latest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/migrate-language-service-latest.md
The key phrase extraction feature functionality currently has not changed outsid
* [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md) * [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md) * [Text analytics for health](../text-analytics-for-health/quickstart.md)
-
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/role-based-access-control.md
Azure RBAC can be assigned to a Language resource. To grant access to an Azure r
1. On the **Members** tab, select a user, group, service principal, or managed identity. 1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.yml).
## Language role types
A user that should only be validating and reviewing the Language apps, typically
Only Export POST operation under: * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export) All the Batch Testing Web APIs
- *[Language Runtime CLU APIs](/rest/api/language/2023-04-01/conversation-analysis-runtime)
+ *[Language Runtime CLU APIs](/rest/api/language)
*[Language Runtime Text Analysis APIs](https://go.microsoft.com/fwlink/?linkid=2239169) :::column-end::: :::row-end:::
ai-services Use Asynchronously https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/use-asynchronously.md
Currently, the following features are available to be used asynchronously:
* Text Analytics for health
* Personally Identifiable Information (PII)
-When you send asynchronous requests, you will incur charges based on number of text records you include in your request, for each feature use. For example, if you send a text record for sentiment analysis and NER, it will be counted as sending two text records, and you will be charged for both according to your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
+When you send asynchronous requests, you'll incur charges based on number of text records you include in your request, for each feature use. For example, if you send a text record for sentiment analysis and NER, it will be counted as sending two text records, and you'll be charged for both according to your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
## Submit an asynchronous job using the REST API
-To submit an asynchronous job, review the [reference documentation](/rest/api/language/2023-04-01/text-analysis-runtime/submit-job) for the JSON body you'll send in your request.
+To submit an asynchronous job, review the [reference documentation](/rest/api/language/analyze-text-submit-job) for the JSON body you'll send in your request.
1. Add your documents to the `analysisInput` object.
1. In the `tasks` object, include the operations you want performed on your data. For example, if you wanted to perform sentiment analysis, you would include the `SentimentAnalysisLROTask` object.
1. You can optionally:
Once you've created the JSON body for your request, add your key to the `Ocp-Api
POST https://your-endpoint.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-05-01 ```
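A minimal Python sketch of that submission follows; the endpoint, key, and document text are placeholders, and the task payload is an assumed shape based on the sentiment analysis example above.

```python
import requests

endpoint = "https://your-endpoint.cognitiveservices.azure.com"
key = "<your-api-key>"

# One document, one task: sentiment analysis as a long-running operation.
body = {
    "displayName": "Example async job",
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "The quickstart was easy to follow."}
        ]
    },
    "tasks": [
        {"kind": "SentimentAnalysis", "taskName": "sentiment", "parameters": {}}
    ],
}

response = requests.post(
    f"{endpoint}/language/analyze-text/jobs",
    params={"api-version": "2022-05-01"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
# A 202 response carries the polling URL in the operation-location header.
operation_location = response.headers["operation-location"]
```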
-A successful call will return a 202 response code. The `operation-location` in the response header will be the URL you will use to retrieve the API results. The value will look similar to the following URL:
+A successful call will return a 202 response code. The `operation-location` in the response header will be the URL you'll use to retrieve the API results. The value will look similar to the following URL:
```http GET {Endpoint}/language/analyze-text/jobs/12345678-1234-1234-1234-12345678?api-version=2022-05-01 ```
-To [get the status and retrieve the results](/rest/api/language/2023-04-01/text-analysis-runtime/job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key`. The response will include the results of your API call.
+To [get the status and retrieve the results](/rest/api/language/analyze-text-job-status) of the request, send a GET request to the URL you received in the `operation-location` header from the previous API response. Remember to include your key in the `Ocp-Apim-Subscription-Key`. The response will include the results of your API call.
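Continuing the sketch above (and reusing its `key` and `operation_location` variables, which are assumptions of that sketch), polling until the job reaches a terminal status might look like this:

```python
import time

import requests

headers = {"Ocp-Apim-Subscription-Key": key}  # same key as the submit request

# Poll the operation-location URL until the job finishes.
while True:
    job = requests.get(operation_location, headers=headers).json()
    if job["status"] in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(2)

# Each completed task carries its results under tasks.items.
for item in job["tasks"]["items"]:
    print(item.get("results"))
```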
## Send asynchronous API requests using the client library
When using this feature asynchronously, the API results are available for 24 hou
## Automatic language detection
-Starting in version `2022-07-01-preview` of the REST API, you can request automatic [language detection](../language-detection/overview.md) on your documents. By setting the `language` parameter to `auto`, the detected language code of the text will be returned as a language value in the response. This language