Updates from: 08/23/2022 01:08:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication In Azure Static App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md
Previously updated : 06/28/2022 Last updated : 08/22/2022
OpenID Connect (OIDC) is an authentication protocol that's built on OAuth 2.0. U
When the access token expires or the app session is invalidated, Azure Static Web App initiates a new authentication request and redirects users to Azure AD B2C. If the Azure AD B2C [SSO session](session-behavior.md) is active, Azure AD B2C issues an access token without prompting users to sign in again. If the Azure AD B2C session expires or becomes invalid, users are prompted to sign in again. ## Prerequisites
+- A premium Azure subscription.
- If you haven't created an app yet, follow the guidance how to create an [Azure Static Web App](../static-web-apps/overview.md). - Familiarize yourself with the Azure Static Web App [staticwebapp.config.json](../static-web-apps/configuration.md) configuration file. - Familiarize yourself with the Azure Static Web App [App Settings](../static-web-apps/application-settings.md).
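The prerequisites above set up a Static Web App that authenticates against Azure AD B2C through a custom OIDC provider. As a sketch only, the relevant section of `staticwebapp.config.json` might look like the following; the provider key `aadb2c`, the app-setting names, and the `<TENANT_NAME>`/`<POLICY_NAME>` placeholders are illustrative and must be replaced with your own values:

```json
{
  "auth": {
    "identityProviders": {
      "customOpenIdConnectProviders": {
        "aadb2c": {
          "registration": {
            "clientIdSettingName": "AADB2C_PROVIDER_CLIENT_ID",
            "clientCredential": {
              "clientSecretSettingName": "AADB2C_PROVIDER_CLIENT_SECRET"
            },
            "openIdConnectConfiguration": {
              "wellKnownOpenIdConfiguration": "https://<TENANT_NAME>.b2clogin.com/<TENANT_NAME>.onmicrosoft.com/<POLICY_NAME>/v2.0/.well-known/openid-configuration"
            }
          },
          "login": {
            "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
            "scopes": [],
            "loginParameterNames": []
          }
        }
      }
    }
  }
}
```

The two app settings referenced by `clientIdSettingName` and `clientSecretSettingName` would be defined in the Static Web App's application settings, which is why the App Settings article is listed as a prerequisite.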
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
Complete the following steps to create a policy that applies to all selected use
1. In the **Azure portal**, browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **+ New policy**. 1. Enter a name for this policy, such as *Combined Security Info Registration on Trusted Networks*.
-1. Under **Assignments**, select **Users and groups**. Choose the users and groups you want this policy to apply to, then select **Done**.
+1. Under **Assignments**, select **Users or workload identities**. Choose the users and groups you want this policy to apply to, then select **Done**.
> [!WARNING] > Users must be enabled for combined registration.
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
Title: Block legacy authentication - Azure Active Directory description: Learn how to improve your security posture by blocking legacy authentication using Azure AD Conditional Access. Previously updated : 06/21/2022 Last updated : 08/22/2022
-# How to: Block legacy authentication access to Azure AD with Conditional Access
+# Block legacy authentication with Azure AD Conditional Access
-To give your users easy access to your cloud apps, Azure Active Directory (Azure AD) supports a broad variety of authentication protocols including legacy authentication. However, legacy authentication doesn't support multifactor authentication (MFA). MFA is in many environments a common requirement to address identity theft.
+To give your users easy access to your cloud apps, Azure Active Directory (Azure AD) supports a broad variety of authentication protocols including legacy authentication. However, legacy authentication doesn't support things like multifactor authentication (MFA). MFA is a common requirement to improve security posture in organizations.
> [!NOTE] > Effective October 1, 2022, we will begin to permanently disable Basic Authentication for Exchange Online in all Microsoft 365 tenants regardless of usage, except for SMTP Authentication. Read more [here](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online).
Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020
> - Azure AD accounts in organizations that have disabled legacy authentication experience 67 percent fewer compromises than those where legacy authentication is enabled >
-If your environment is ready to block legacy authentication to improve your tenant's protection, you can accomplish this goal with Conditional Access. This article explains how you can configure Conditional Access policies that block legacy authentication for all workloads within your tenant.
+If you're ready to block legacy authentication to improve your tenant's protection, you can accomplish this goal with Conditional Access. This article explains how you can configure Conditional Access policies that block legacy authentication for all workloads within your tenant.
While rolling out legacy authentication blocking protection, we recommend a phased approach, rather than disabling it for all users all at once. Customers may choose to first begin disabling basic authentication on a per-protocol basis, by applying Exchange Online authentication policies, then (optionally) also blocking legacy authentication via Conditional Access policies when ready.
Many clients that previously only supported legacy authentication now support mo
> > When implementing Exchange Active Sync (EAS) with CBA, configure clients to use modern authentication. Clients not using modern authentication for EAS with CBA **are not blocked** with [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online). However, these clients **are blocked** by Conditional Access policies configured to block legacy authentication. >
->For more Information on implementing support for CBA with Azure AD and modern authentication See: [How to configure Azure AD certificate-based authentication (Preview)](../authentication/how-to-certificate-based-authentication.md). As another option, CBA performed at a federation server can be used with modern authentication.
+> For more information on implementing support for CBA with Azure AD and modern authentication, see [How to configure Azure AD certificate-based authentication (Preview)](../authentication/how-to-certificate-based-authentication.md). As another option, CBA performed at a federation server can be used with modern authentication.
If you're using Microsoft Intune, you might be able to change the authentication type using the email profile you push or deploy to your devices. If you're using iOS devices (iPhones and iPads), you should take a look at [Add e-mail settings for iOS and iPadOS devices in Microsoft Intune](/mem/intune/configuration/email-settings-ios).
The easiest way to block legacy authentication across your entire organization i
### Indirectly blocking legacy authentication
-Even if your organization isn't ready to block legacy authentication across the entire organization, you should ensure that sign-ins using legacy authentication aren't bypassing policies that require grant controls such as requiring multifactor authentication or compliant/hybrid Azure AD joined devices. During authentication, legacy authentication clients don't support sending MFA, device compliance, or join state information to Azure AD. Therefore, apply policies with grant controls to all client applications so that legacy authentication based sign-ins that can't satisfy the grant controls are blocked. With the general availability of the client apps condition in August 2020, newly created Conditional Access policies apply to all client apps by default.
+If your organization isn't ready to block legacy authentication across the entire organization, you should ensure that sign-ins using legacy authentication aren't bypassing policies that require grant controls such as requiring multifactor authentication or compliant/hybrid Azure AD joined devices. During authentication, legacy authentication clients don't support sending MFA, device compliance, or join state information to Azure AD. Therefore, apply policies with grant controls to all client applications so that legacy authentication based sign-ins that can't satisfy the grant controls are blocked. With the general availability of the client apps condition in August 2020, newly created Conditional Access policies apply to all client apps by default.
![Client apps condition default configuration](./media/block-legacy-authentication/client-apps-condition-configured-no.png)
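Policies like the one described here can also be managed programmatically through the Microsoft Graph Conditional Access API. The following is a minimal, hypothetical sketch of the payload for a policy that blocks legacy authentication; the field names follow the Graph `conditionalAccessPolicy` schema, while the display name and the empty exclusion list are placeholders, not prescribed values:

```python
# Hypothetical sketch of a Conditional Access policy payload that blocks
# legacy authentication, as it might be sent to the Microsoft Graph API
# (POST /identity/conditionalAccess/policies). Display name is illustrative.
block_legacy_auth_policy = {
    "displayName": "Block legacy authentication",
    # Start in report-only mode to measure impact before enforcing.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            # In a real deployment, exclude emergency access accounts here.
            "excludeUsers": [],
        },
        "applications": {"includeApplications": ["All"]},
        # Target only legacy clients: Exchange ActiveSync and "other clients".
        "clientAppTypes": ["exchangeActiveSync", "other"],
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["block"],
    },
}
```

Scoping `clientAppTypes` to the two legacy values is what makes this an indirect block: modern-authentication sign-ins never match the policy.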
You can select all available grant controls for the **Other clients** condition;
- [Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md) - If you aren't familiar with configuring Conditional Access policies yet, see [require MFA for specific apps with Azure Active Directory Conditional Access](../authentication/tutorial-enable-azure-mfa.md) for an example. - For more information about modern authentication support, see [How modern authentication works for Office client apps](/office365/enterprise/modern-auth-for-office-2013-and-2016) -- [How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
+- [How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
+- [Enable modern authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/enable-or-disable-modern-authentication-in-exchange-online)
+- [Enable Modern Authentication for Office 2013 on Windows devices](/office365/admin/security-and-compliance/enable-modern-authentication)
+- [How to configure Exchange Server on-premises to use Hybrid Modern Authentication](/office365/enterprise/configure-exchange-server-for-hybrid-modern-authentication)
+- [How to use Modern Authentication with Skype for Business](/skypeforbusiness/manage/authentication/use-adal)
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
# Conditional Access: Filter for devices
-When creating Conditional Access policies, administrators have asked for the ability to target or exclude specific devices in their environment. The condition filter for devices give administrators this capability. Now you can target specific devices using [supported operators and properties for device filters](#supported-operators-and-device-properties-for-filters) and the other available assignment conditions in your Conditional Access policies.
+When creating Conditional Access policies, administrators have asked for the ability to target or exclude specific devices in their environment. The condition filter for devices gives administrators this capability. Now you can target specific devices using [supported operators and properties for device filters](#supported-operators-and-device-properties-for-filters) and the other available assignment conditions in your Conditional Access policies.
:::image type="content" source="media/concept-condition-filters-for-devices/create-filter-for-devices-condition.png" alt-text="Creating a filter for device in Conditional Access policy conditions":::
Policy 1: All users with the directory role of Global administrator, accessing t
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **Directory roles** and choose **Global administrator**. > [!WARNING]
Policy 2: All users with the directory role of Global administrator, accessing t
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **Directory roles** and choose **Global administrator**. > [!WARNING]
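When a filter for devices policy is managed through the Microsoft Graph API, the filter appears as a `deviceFilter` object inside the policy's conditions. A small, hypothetical sketch; the rule shown (matching a device extension attribute) is illustrative only, though it uses the documented operator syntax:

```python
# Hypothetical sketch of the "filter for devices" condition fragment in a
# Conditional Access policy payload. The rule string uses the documented
# filter syntax (-eq, -startsWith, ...); the attribute value "SAW" is made up.
device_filter_condition = {
    "devices": {
        "deviceFilter": {
            # "exclude" means devices matching the rule are excluded from
            # the policy; "include" would target only matching devices.
            "mode": "exclude",
            "rule": 'device.extensionAttribute1 -eq "SAW"',
        }
    }
}
```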
active-directory Concept Conditional Access Policy Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policy-common.md
Previously updated : 11/05/2021 Last updated : 08/22/2022
active-directory Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/controls.md
# Custom controls (preview)
-Custom controls is a preview capability of the Azure Active Directory. When using custom controls, your users are redirected to a compatible service to satisfy authentication requirements outside of Azure Active Directory. To satisfy this control, a user's browser is redirected to the external service, performs any required authentication, and is then redirected back to Azure Active Directory. Azure Active Directory verifies the response and, if the user was successfully authenticated or validated, the user continues in the Conditional Access flow.
+Custom controls are a preview capability of Azure Active Directory. When using custom controls, your users are redirected to a compatible service to satisfy authentication requirements outside of Azure Active Directory. To satisfy this control, a user's browser is redirected to the external service, performs any required authentication, and is then redirected back to Azure Active Directory. Azure Active Directory verifies the response and, if the user was successfully authenticated or validated, the user continues in the Conditional Access flow.
> [!NOTE] > For more information about changes we are planning to the Custom Control capability, see the February 2020 [Archive for What's new](../fundamentals/whats-new-archive.md#upcoming-changes-to-custom-controls).
Custom controls is a preview capability of the Azure Active Directory. When usin
## Creating custom controls > [!IMPORTANT]
-> Custom controls cannot be used with Identity Protection's automation requiring Azure AD Multi-Factor Authentication, Azure AD self-service password reset (SSPR), satisfying multi-factor authentication claim requirements, to elevate roles in Privileged Identity Manager (PIM), as part of Intune device enrollment, or when joining devices to Azure AD.
+> Custom controls cannot be used with Identity Protection's automation requiring Azure AD Multifactor Authentication, Azure AD self-service password reset (SSPR), satisfying multifactor authentication claim requirements, to elevate roles in Privileged Identity Manager (PIM), as part of Intune device enrollment, or when joining devices to Azure AD.
Custom Controls works with a limited set of approved authentication providers. To create a custom control, you should first contact the provider that you wish to utilize. Each non-Microsoft provider has its own process and requirements to sign up, subscribe, or otherwise become a part of the service, and to indicate that you wish to integrate with Conditional Access. At that point, the provider will provide you with a block of data in JSON format. This data allows the provider and Conditional Access to work together for your tenant, creates the new control and defines how Conditional Access can tell if your users have successfully performed verification with the provider.
-Copy the JSON data and then paste it into the related textbox. Do not make any changes to the JSON unless you explicitly understand the change you're making. Making any change could break the connection between the provider and Microsoft and potentially lock you and your users out of your accounts.
+Copy the JSON data and then paste it into the related textbox. Don't make any changes to the JSON unless you explicitly understand the change you're making. Making any change could break the connection between the provider and Microsoft and potentially lock you and your users out of your accounts.
The option to create a custom control is in the **Manage** section of the **Conditional Access** page.
Clicking **New custom control** opens a blade with a textbox for the JSON data
To delete a custom control, you must first ensure that it isn't being used in any Conditional Access policy. Once complete: 1. Go to the Custom controls list
-1. Click …
+1. Select …
1. Select **Delete**. ## Editing custom controls
To edit a custom control, you must delete the current control and create a new c
## Known limitations
-Custom controls cannot be used with Identity Protection's automation requiring Azure AD Multi-Factor Authentication, Azure AD self-service password reset (SSPR), satisfying multi-factor authentication claim requirements, to elevate roles in Privileged Identity Manager (PIM), as part of Intune device enrollment, or when joining devices to Azure AD.
+Custom controls can't be used with Identity Protection's automation requiring Azure AD Multifactor Authentication, Azure AD self-service password reset (SSPR), satisfying multifactor authentication claim requirements, to elevate roles in Privileged Identity Manager (PIM), as part of Intune device enrollment, or when joining devices to Azure AD.
## Next steps
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/faqs.md
Title: Azure Active Directory Conditional Access FAQs | Microsoft Docs
+ Title: Azure Active Directory Conditional Access FAQs
description: Get answers to frequently asked questions about Conditional Access in Azure Active Directory. Previously updated : 10/16/2020 Last updated : 08/22/2022
For information about applications that work with Conditional Access policies, s
## Are Conditional Access policies enforced for B2B collaboration and guest users?
-Policies are enforced for business-to-business (B2B) collaboration users. However, in some cases, a user might not be able to satisfy the policy requirements. For example, a guest user's organization might not support multi-factor authentication.
+Policies are enforced for business-to-business (B2B) collaboration users. However, in some cases, a user might not be able to satisfy the policy requirements. For example, a guest user's organization might not support multifactor authentication.
## Does a SharePoint Online policy also apply to OneDrive for Business?
Yes. A SharePoint Online policy also applies to OneDrive for Business. For more
## Why can't I set a policy directly on client apps, like Word or Outlook?
-A Conditional Access policy sets requirements for accessing a service. It's enforced when authentication to that service occurs. The policy is not set directly on a client application. Instead, it is applied when a client calls a service. For example, a policy set on SharePoint applies to clients calling SharePoint. A policy set on Exchange applies to Outlook. For more information, see the article, [Conditional Access service dependencies](service-dependencies.md) and consider targeting policies to the [Office 365 app](concept-conditional-access-cloud-apps.md#office-365) instead.
+A Conditional Access policy sets requirements for accessing a service. It's enforced when authentication to that service occurs. The policy isn't set directly on a client application. Instead, it's applied when a client calls a service. For example, a policy set on SharePoint applies to clients calling SharePoint. A policy set on Exchange applies to Outlook. For more information, see the article, [Conditional Access service dependencies](service-dependencies.md) and consider targeting policies to the [Office 365 app](concept-conditional-access-cloud-apps.md#office-365) instead.
## Does a Conditional Access policy apply to service accounts?
-Conditional Access policies apply to all user accounts. This includes user accounts that are used as service accounts. Often, a service account that runs unattended can't satisfy the requirements of a Conditional Access policy. For example, multi-factor authentication might be required. Service accounts can be excluded from a policy by using a [user or group exclusion](concept-conditional-access-users-groups.md#exclude-users).
+Conditional Access policies apply to all user accounts. This includes user accounts that are used as service accounts. Often, a service account that runs unattended can't satisfy the requirements of a Conditional Access policy. For example, multifactor authentication might be required. Service accounts can be excluded from a policy by using a [user or group exclusion](concept-conditional-access-users-groups.md#exclude-users).
## What is the default exclusion policy for unsupported device platforms?
-Currently, Conditional Access policies are selectively enforced on users of iOS and Android devices. Applications on other device platforms are, by default, not affected by the Conditional Access policy for iOS and Android devices. A tenant admin can choose to override the global policy to disallow access to users on platforms that are not supported.
+Currently, Conditional Access policies are selectively enforced on users of iOS and Android devices. Applications on other device platforms are, by default, not affected by the Conditional Access policy for iOS and Android devices. A tenant admin can choose to override the global policy to disallow access to users on platforms that aren't supported.
## How do Conditional Access policies work for Microsoft Teams?
For more information, see the article, [Conditional Access service dependencies]
After enabling some Conditional Access policies on the tenant in Microsoft Teams, certain tabs may no longer function in the desktop client as expected. However, the affected tabs function when using the Microsoft Teams web client. The tabs affected may include Power BI, Forms, VSTS, Power Apps, and SharePoint List.
-To see the affected tabs you must use the Teams web client in Edge, Internet Explorer, or Chrome with the Windows 10 Accounts extension installed. Some tabs depend on web authentication, which doesn't work in the Microsoft Teams desktop client when Conditional Access is enabled. Microsoft is working with partners to enable these scenarios. To date, we have enabled scenarios involving Planner, OneNote, and Stream.
+To see the affected tabs, you must use the Teams web client in Microsoft Edge, Internet Explorer, or Chrome with the Windows 10 Accounts extension installed. Some tabs depend on web authentication, which doesn't work in the Microsoft Teams desktop client when Conditional Access is enabled. Microsoft is working with partners to enable these scenarios. To date, we have enabled scenarios involving Planner, OneNote, and Stream.
## Next steps
active-directory Howto Conditional Access Policy Admin Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md
Title: Conditional Access - Require MFA for administrators - Azure Active Directory
-description: Create a custom Conditional Access policy to require administrators to perform multi-factor authentication
+description: Create a custom Conditional Access policy to require administrators to perform multifactor authentication
Previously updated : 11/05/2021 Last updated : 08/22/2022
# Conditional Access: Require MFA for administrators
-Accounts that are assigned administrative rights are targeted by attackers. Requiring multi-factor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.
+Accounts that are assigned administrative rights are targeted by attackers. Requiring multifactor authentication (MFA) on those accounts is an easy way to reduce the risk of those accounts being compromised.
Microsoft recommends you require MFA on the following roles at a minimum, based on [identity score recommendations](../fundamentals/identity-secure-score.md):
Conditional Access policies are powerful tools, we recommend excluding the follo
- **Emergency access** or **break-glass** accounts to prevent tenant-wide account lockout. In the unlikely scenario all administrators are locked out of your tenant, your emergency-access administrative account can be used to log into the tenant to take steps to recover access. - More information can be found in the article, [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).-- **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that are not tied to any particular user. They are normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals are not blocked by Conditional Access.
+- **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals aren't blocked by Conditional Access.
- If your organization has these accounts in use in scripts or code, consider replacing them with [managed identities](../managed-identities-azure-resources/overview.md). As a temporary workaround, you can exclude these specific accounts from the baseline policy. ## Template deployment
Organizations can choose to deploy this policy using the steps outlined below or
## Create a Conditional Access policy
-The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multi-factor authentication.
+The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multifactor authentication.
1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator. 1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **Directory roles** and choose built-in roles like: - Global administrator - Application administrator
The following steps will help create a Conditional Access policy to require thos
> Conditional Access policies support built-in roles. Conditional Access policies are not enforced for other role types including [administrative unit-scoped](../roles/admin-units-assign-roles.md) or [custom roles](../roles/custom-create.md). 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**, and select **Done**.
-1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and select **Select**.
+1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**. 1. Select **Create** to create and enable your policy.
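The portal steps above can also be expressed as a Microsoft Graph Conditional Access policy payload. A hypothetical sketch; the GUID shown is the well-known `roleTemplateId` for Global Administrator, but verify role IDs against your own tenant, and the display name is illustrative:

```python
# Hypothetical sketch of a require-MFA-for-administrators policy payload for
# the Microsoft Graph Conditional Access API. Only one role is shown; the
# article recommends several more.
require_mfa_for_admins = {
    "displayName": "Require MFA for administrators",
    # Report-only first, matching the article's recommendation.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {
            # Well-known roleTemplateId for Global Administrator.
            "includeRoles": ["62e90394-69f5-4237-9190-012177145e10"],
            # Break-glass / emergency access accounts would be listed here.
            "excludeUsers": [],
        },
        "applications": {"includeApplications": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```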
active-directory Howto Conditional Access Policy All Users Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md
Title: Conditional Access - Require MFA for all users - Azure Active Directory
-description: Create a custom Conditional Access policy to require all users do multi-factor authentication
+description: Create a custom Conditional Access policy to require all users to do multifactor authentication
Previously updated : 03/28/2022 Last updated : 08/22/2022
Organizations may have many cloud applications in use. Not all of those applicat
### Subscription activation
-Organizations that use the [Subscription Activation](/windows/deployment/windows-10-subscription-activation) feature to enable users to "step-up" from one version of Windows to another, may want to exclude the Universal Store Service APIs and Web Application, AppID 45a330b1-b1ec-4cc1-9161-9f03992aa49f from their all users all cloud apps MFA policy.
+Organizations that use [Subscription Activation](/windows/deployment/windows-10-subscription-activation) to enable users to "step-up" from one version of Windows to another, may want to exclude the Universal Store Service APIs and Web Application, AppID 45a330b1-b1ec-4cc1-9161-9f03992aa49f from their all users all cloud apps MFA policy.
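In a Graph API payload, the exclusion described above lives in the policy's applications condition. A hypothetical fragment; the AppID comes from the text above, everything else follows the documented schema:

```python
# Hypothetical sketch of the applications condition for an all-users,
# all-cloud-apps MFA policy that excludes the Universal Store Service APIs
# and Web Application (AppID taken from the article text).
applications_condition = {
    "applications": {
        "includeApplications": ["All"],
        "excludeApplications": ["45a330b1-b1ec-4cc1-9161-9f03992aa49f"],
    }
}
```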
## Template deployment
Organizations can choose to deploy this policy using the steps outlined below or
## Create a Conditional Access policy
-The following steps will help create a Conditional Access policy to require all users do multi-factor authentication.
+The following steps will help create a Conditional Access policy to require all users to do multifactor authentication.
1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator. 1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users** 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
- 1. Under **Exclude**, select any applications that don't require multi-factor authentication.
-1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and select **Select**.
+ 1. Under **Exclude**, select any applications that don't require multifactor authentication.
+1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**. 1. Select **Create** to create and enable your policy.
After confirming your settings using [report-only mode](howto-conditional-access
Organizations may choose to incorporate known network locations known as **Named locations** to their Conditional Access policies. These named locations may include trusted IPv4 networks like those for a main office location. For more information about configuring named locations, see the article [What is the location condition in Azure Active Directory Conditional Access?](location-condition.md)
-In the example policy above, an organization may choose to not require multi-factor authentication if accessing a cloud app from their corporate network. In this case they could add the following configuration to the policy:
+In the example policy above, an organization may choose to not require multifactor authentication if accessing a cloud app from their corporate network. In this case they could add the following configuration to the policy:
1. Under **Assignments**, select **Conditions** > **Locations**. 1. Configure **Yes**.
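The named-location exclusion described above maps to the policy's locations condition in the Microsoft Graph schema. A hypothetical fragment; `All` and `AllTrusted` are documented sentinel values, and a specific named location would instead be referenced by its ID:

```python
# Hypothetical sketch of the locations condition: apply the policy from all
# locations except trusted named locations, so users on the corporate
# network skip MFA as the article describes.
locations_condition = {
    "locations": {
        "includeLocations": ["All"],
        "excludeLocations": ["AllTrusted"],
    }
}
```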
active-directory Howto Conditional Access Policy Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md
Title: Conditional Access - Require MFA for Azure management - Azure Active Directory
-description: Create a custom Conditional Access policy to require multi-factor authentication for Azure management tasks
+description: Create a custom Conditional Access policy to require multifactor authentication for Azure management tasks
Previously updated : 02/03/2022 Last updated : 08/22/2022
Organizations use many Azure services and manage them from Azure Resource Manage
* Azure PowerShell * Azure CLI
-These tools can provide highly privileged access to resources, that can alter subscription-wide configurations, service settings, and subscription billing. To protect these privileged resources, Microsoft recommends requiring multi-factor authentication for any user accessing these resources. In Azure AD, these tools are grouped together in a suite called [Microsoft Azure Management](concept-conditional-access-cloud-apps.md#microsoft-azure-management). For Azure Government, this suite should be the Azure Government Cloud Management API app.
+These tools can provide highly privileged access to resources that can make the following changes:
+
+- Alter subscription-wide configurations
+- Change service settings
+- Modify subscription billing
+
+To protect these privileged resources, Microsoft recommends requiring multifactor authentication for any user accessing these resources. In Azure AD, these tools are grouped together in a suite called [Microsoft Azure Management](concept-conditional-access-cloud-apps.md#microsoft-azure-management). For Azure Government, this suite should be the Azure Government Cloud Management API app.
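In Microsoft Graph terms, targeting this suite means including its application ID in the policy's `applications` condition. A hedged sketch follows; the GUID shown is the commonly documented application ID for the Windows Azure Service Management API (Microsoft Azure Management), so verify it against your tenant before use:

```python
# Hedged sketch: conditions targeting the Microsoft Azure Management suite.
# This GUID is the commonly documented Windows Azure Service Management API app ID.
AZURE_MANAGEMENT_APP_ID = "797f4846-ba00-4fd7-ba43-dac1f8f63013"

policy = {
    "displayName": "Require MFA for Azure management",
    "state": "enabledForReportingButNotEnforced",  # Report-only
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": [AZURE_MANAGEMENT_APP_ID]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```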
## User exclusions
Conditional Access policies are powerful tools, we recommend excluding the follo
* **Emergency access** or **break-glass** accounts to prevent tenant-wide account lockout. In the unlikely scenario that all administrators are locked out of your tenant, your emergency-access administrative account can be used to log into the tenant and take steps to recover access. * More information can be found in the article, [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
-* **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals are not blocked by Conditional Access.
+* **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals aren't blocked by Conditional Access.
* If your organization has these accounts in use in scripts or code, consider replacing them with [managed identities](../managed-identities-azure-resources/overview.md). As a temporary workaround, you can exclude these specific accounts from the baseline policy. ## Template deployment
Organizations can choose to deploy this policy using the steps outlined below or
## Create a Conditional Access policy
-The following steps will help create a Conditional Access policy to require users who access the [Microsoft Azure Management](concept-conditional-access-cloud-apps.md#microsoft-azure-management) suite do multi-factor authentication.
+The following steps will help create a Conditional Access policy to require users who access the [Microsoft Azure Management](concept-conditional-access-cloud-apps.md#microsoft-azure-management) suite do multifactor authentication.
> [!CAUTION] > Make sure you understand how Conditional Access works before setting up a policy to manage access to Microsoft Azure Management. Make sure you don't create conditions that could block your own access to the portal.
The following steps will help create a Conditional Access policy to require user
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
-1. Under **Cloud apps or actions** > **Include**, select **Select apps**, choose **Microsoft Azure Management**, and select **Select** then **Done**.
-1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and select **Select**.
+1. Under **Cloud apps or actions** > **Include**, select **Select apps**, choose **Microsoft Azure Management**, and select **Select**.
+1. Under **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**. 1. Select **Create** to create and enable your policy.
active-directory Howto Conditional Access Policy Block Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-access.md
Title: Conditional Access - Block access - Azure Active Directory
-description: Create a custom Conditional Access policy to
+description: Create a custom Conditional Access policy to block access
Previously updated : 02/14/2022 Last updated : 08/22/2022
Conditional Access policies are powerful tools, we recommend excluding the follo
* **Emergency access** or **break-glass** accounts to prevent tenant-wide account lockout. In the unlikely scenario that all administrators are locked out of your tenant, your emergency-access administrative account can be used to log into the tenant and take steps to recover access. * More information can be found in the article, [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md).
-* **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that are not tied to any particular user. They are normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals are not blocked by Conditional Access.
+* **Service accounts** and **service principals**, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that aren't tied to any particular user. They're normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals aren't blocked by Conditional Access.
* If your organization has these accounts in use in scripts or code, consider replacing them with [managed identities](../managed-identities-azure-resources/overview.md). As a temporary workaround, you can exclude these specific accounts from the baseline policy. ## Create a Conditional Access policy
-The following steps will help create Conditional Access policies to block access to all apps except for [Office 365](concept-conditional-access-cloud-apps.md#office-365) if users are not on a trusted network. These policies are put in to [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they will have on existing users. When administrators are comfortable that the policies apply as they intend, they can switch them to **On**.
+The following steps will help create Conditional Access policies to block access to all apps except for [Office 365](concept-conditional-access-cloud-apps.md#office-365) if users aren't on a trusted network. These policies are put in to [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they'll have on existing users. When administrators are comfortable that the policies apply as they intend, they can switch them to **On**.
The first policy blocks access to all apps except for Microsoft 365 applications if not on a trusted location.
The first policy blocks access to all apps except for Microsoft 365 applications
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
1. Under **Cloud apps or actions**, select the following options: 1. Under **Include**, select **All cloud apps**.
- 1. Under **Exclude**, select **Office 365**, select **Select**, then select **Done**.
+ 1. Under **Exclude**, select **Office 365**, then select **Select**.
1. Under **Conditions**: 1. Under **Conditions** > **Location**. 1. Set **Configure** to **Yes** 1. Under **Include**, select **Any location**. 1. Under **Exclude**, select **All trusted locations**.
- 1. Select **Done**.
- 1. Under **Client apps (Preview)**, set **Configure** to **Yes**, and select **Done**, then **Done**.
+ 1. Under **Client apps**, set **Configure** to **Yes**, and select **Done**.
1. Under **Access controls** > **Grant**, select **Block access**, then select **Select**. 1. Confirm your settings and set **Enable policy** to **Report-only**. 1. Select **Create** to create and enable your policy. After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
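The first policy's conditions can be sketched as the following Graph payload. This is a hedged sketch under the assumption that `Office365` and `AllTrusted` are the Graph keyword values for the Office 365 app group and all trusted named locations:

```python
# Hedged sketch: block everything except Office 365 when off trusted networks.
policy = {
    "displayName": "Block access outside trusted locations except Office 365",
    "state": "enabledForReportingButNotEnforced",  # Report-only
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {
            "includeApplications": ["All"],
            "excludeApplications": ["Office365"],  # the Office 365 app group
        },
        "locations": {
            "includeLocations": ["All"],          # Any location
            "excludeLocations": ["AllTrusted"],   # All trusted locations
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```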
-A second policy is created below to require multi-factor authentication or a compliant device for users of Microsoft 365.
+A second policy is created below to require multifactor authentication or a compliant device for users of Microsoft 365.
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
-1. Under **Cloud apps or actions** > **Include**, select **Select apps**, choose **Office 365**, and select **Select**, then **Done**.
+1. Under **Cloud apps or actions** > **Include**, select **Select apps**, choose **Office 365**, and select **Select**.
1. Under **Access controls** > **Grant**, select **Grant access**.
- 1. Select **Require multi-factor authentication** and **Require device to be marked as compliant** select **Select**.
+ 1. Select **Require multifactor authentication** and **Require device to be marked as compliant**, then select **Select**.
1. Ensure **Require one of the selected controls** is selected. 1. Select **Select**. 1. Confirm your settings and set **Enable policy** to **Report-only**.
active-directory Howto Conditional Access Policy Block Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md
Previously updated : 11/05/2021 Last updated : 08/22/2022
Organizations can choose to deploy this policy using the steps outlined below or
## Create a Conditional Access policy
-The following steps will help create a Conditional Access policy to block legacy authentication requests. This policy is put in to [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they will have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
+The following steps will help create a Conditional Access policy to block legacy authentication requests. This policy is put in to [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they'll have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator. 1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**.
- 1. Under **Exclude**, select **Users and groups** and choose any accounts that must maintain the ability to use legacy authentication. Exclude at least one account to prevent yourself from being locked out. If you do not exclude any account, you will not be able to create this policy.
- 1. Select **Done**.
+ 1. Under **Exclude**, select **Users and groups** and choose any accounts that must maintain the ability to use legacy authentication. Exclude at least one account to prevent yourself from being locked out. If you don't exclude any account, you won't be able to create this policy.
1. Under **Cloud apps or actions**, select **All cloud apps**.
- 1. Select **Done**.
1. Under **Conditions** > **Client apps**, set **Configure** to **Yes**. 1. Check only the boxes **Exchange ActiveSync clients** and **Other clients**. 1. Select **Done**.
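The client-app condition configured above can be sketched as a Graph `clientAppTypes` fragment. A hedged sketch; the excluded user ID is a placeholder for the account you keep out of the policy:

```python
# Hedged sketch: block legacy authentication via the clientAppTypes condition.
policy = {
    "displayName": "Block legacy authentication",
    "state": "enabledForReportingButNotEnforced",  # Report-only
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            # Exclude at least one account so you cannot lock yourself out (placeholder GUID).
            "excludeUsers": ["00000000-0000-0000-0000-000000000000"],
        },
        "applications": {"includeApplications": ["All"]},
        # Exchange ActiveSync clients and Other clients only
        "clientAppTypes": ["exchangeActiveSync", "other"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```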
active-directory Howto Conditional Access Policy Compliant Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md
Previously updated : 03/28/2022 Last updated : 08/22/2022
The following steps will help create a Conditional Access policy to require devi
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. 1. If you must exclude specific applications from your policy, you can choose them from the **Exclude** tab under **Select excluded cloud apps** and choose **Select**.
- 1. Select **Done**.
1. Under **Access controls** > **Grant**. 1. Select **Require device to be marked as compliant** and **Require Hybrid Azure AD joined device**. 1. **For multiple controls**, select **Require one of the selected controls**.
active-directory Howto Conditional Access Policy Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md
Previously updated : 11/05/2021 Last updated : 08/22/2022
# Conditional Access: Block access by location
-With the location condition in Conditional Access, you can control access to your cloud apps based on the network location of a user. The location condition is commonly used to block access from countries/regions where your organization knows traffic should not come from.
+With the location condition in Conditional Access, you can control access to your cloud apps based on the network location of a user. The location condition is commonly used to block access from countries/regions where your organization knows traffic shouldn't come from.
> [!NOTE] > Conditional Access policies are enforced after first-factor authentication is completed. Conditional Access isn't intended to be an organization's first line of defense for scenarios like denial-of-service (DoS) attacks, but it can use signals from these events to determine access.
With the location condition in Conditional Access, you can control access to you
1. Choose **New location**. 1. Give your location a name. 1. Choose **IP ranges** if you know the specific externally accessible IPv4 address ranges that make up that location or **Countries/Regions**.
- 1. Provide the **IP ranges** or select the **Countries/Regions** for the location you are specifying.
+ 1. Provide the **IP ranges** or select the **Countries/Regions** for the location you're specifying.
* If you choose Countries/Regions, you can optionally choose to include unknown areas. 1. Choose **Save**
More information about the location condition in Conditional Access can be found
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, and select **All cloud apps**. 1. Under **Conditions** > **Location**. 1. Set **Configure** to **Yes**
active-directory Howto Conditional Access Policy Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md
Previously updated : 11/15/2021 Last updated : 08/22/2022
# Conditional Access: Securing security info registration
-Securing when and how users register for Azure AD Multi-Factor Authentication and self-service password reset is possible with user actions in a Conditional Access policy. This feature is available to organizations who have enabled the [combined registration](../authentication/concept-registration-mfa-sspr-combined.md). This functionality allows organizations to treat the registration process like any application in a Conditional Access policy and use the full power of Conditional Access to secure the experience. Users signing in to the Microsoft Authenticator app or enabling passwordless phone sign-in are subject to this policy.
+Securing when and how users register for Azure AD multifactor authentication and self-service password reset is possible with user actions in a Conditional Access policy. This feature is available to organizations who have enabled the [combined registration](../authentication/concept-registration-mfa-sspr-combined.md). This functionality allows organizations to treat the registration process like any application in a Conditional Access policy and use the full power of Conditional Access to secure the experience. Users signing in to the Microsoft Authenticator app or enabling passwordless phone sign-in are subject to this policy.
-Some organizations in the past may have used trusted network location or device compliance as a means to secure the registration experience. With the addition of [Temporary Access Pass](../authentication/howto-authentication-temporary-access-pass.md) in Azure AD, administrators can provide time-limited credentials to their users that allow them to register from any device or location. Temporary Access Pass credentials satisfy Conditional Access requirements for multi-factor authentication.
+Some organizations in the past may have used trusted network location or device compliance as a means to secure the registration experience. With the addition of [Temporary Access Pass](../authentication/howto-authentication-temporary-access-pass.md) in Azure AD, administrators can provide time-limited credentials to their users that allow them to register from any device or location. Temporary Access Pass credentials satisfy Conditional Access requirements for multifactor authentication.
## Template deployment
Organizations can choose to deploy this policy using the steps outlined below or
## Create a policy to secure registration
-The following policy applies to the selected users, who attempt to register using the combined registration experience. The policy requires users to be in a trusted network location, do multi-factor authentication or use Temporary Access Pass credentials.
+The following policy applies to the selected users, who attempt to register using the combined registration experience. The policy requires users to be in a trusted network location, do multifactor authentication or use Temporary Access Pass credentials.
1. In the **Azure portal**, browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration with TAP**.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. > [!WARNING]
The following policy applies to the selected users, who attempt to register usin
1. Include **Any location**. 1. Exclude **All trusted locations**. 1. Under **Access controls** > **Grant**.
- 1. Select **Grant access**, **Require multi-factor authentication**.
+ 1. Select **Grant access**, **Require multifactor authentication**.
1. Select **Select**. 1. Confirm your settings and set **Enable policy** to **Report-only**. 1. Select **Create** to create and enable your policy. After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
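In Graph terms, the registration experience is targeted through a user action rather than a cloud app. A hedged sketch, assuming `urn:user:registersecurityinfo` is the user-action value for **Register security information**:

```python
# Hedged sketch: secure combined security info registration via a user action.
policy = {
    "displayName": "Combined Security Info Registration with TAP",
    "state": "enabledForReportingButNotEnforced",  # Report-only
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {
            # "Register security information" user action instead of a cloud app
            "includeUserActions": ["urn:user:registersecurityinfo"],
        },
        "locations": {
            "includeLocations": ["All"],          # Any location
            "excludeLocations": ["AllTrusted"],   # All trusted locations
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```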
-Administrators will now have to issue Temporary Access Pass credentials to new users so they can satisfy the requirements for multi-factor authentication to register. Steps to accomplish this task, are found in the section [Create a Temporary Access Pass in the Azure AD Portal](../authentication/howto-authentication-temporary-access-pass.md#create-a-temporary-access-pass).
+Administrators will now have to issue Temporary Access Pass credentials to new users so they can satisfy the requirements for multifactor authentication to register. Steps to accomplish this task are found in the section [Create a Temporary Access Pass in the Azure AD Portal](../authentication/howto-authentication-temporary-access-pass.md#create-a-temporary-access-pass).
-Organizations may choose to require other grant controls with or in place of **Require multi-factor authentication** at step 6b. When selecting multiple controls, be sure to select the appropriate radio button toggle to require **all** or **one** of the selected controls when making this change.
+Organizations may choose to require other grant controls with or in place of **Require multifactor authentication** at step 6b. When selecting multiple controls, be sure to select the appropriate radio button toggle to require **all** or **one** of the selected controls when making this change.
### Guest user registration
-For [guest users](../external-identities/what-is-b2b.md) who need to register for multi-factor authentication in your directory you may choose to block registration from outside of [trusted network locations](concept-conditional-access-conditions.md#locations) using the following guide.
+For [guest users](../external-identities/what-is-b2b.md) who need to register for multifactor authentication in your directory you may choose to block registration from outside of [trusted network locations](concept-conditional-access-conditions.md#locations) using the following guide.
1. In the **Azure portal**, browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. In **Name**, enter a name for this policy. For example, **Combined Security Info Registration on Trusted Networks**.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All guest and external users**. 1. Under **Cloud apps or actions**, select **User actions**, check **Register security information**. 1. Under **Conditions** > **Locations**.
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Previously updated : 08/16/2022 Last updated : 08/22/2022
Organizations can choose to deploy this policy using the steps outlined below or
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. 1. Under **Conditions** > **User risk**, set **Configure** to **Yes**. 1. Under **Configure user risk levels needed for policy to be enforced**, select **High**.
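The user-risk condition above can be sketched as a Graph `userRiskLevels` fragment. A hedged sketch; the grant controls shown (MFA plus password change) are a common choice for high-risk users and are an assumption here, not taken from the steps above:

```python
# Hedged sketch: enforce the policy only at high user risk.
policy = {
    "displayName": "Require password change for high-risk users",
    "state": "enabledForReportingButNotEnforced",  # Report-only
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "userRiskLevels": ["high"],
    },
    # Assumed controls: require MFA and a password change together.
    "grantControls": {"operator": "AND", "builtInControls": ["mfa", "passwordChange"]},
}
```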
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Previously updated : 08/16/2022 Last updated : 08/22/2022
# Conditional Access: Sign-in risk-based Conditional Access
-Most users have a normal behavior that can be tracked, when they fall outside of this norm it could be risky to allow them to just sign in. You may want to block that user or maybe just ask them to perform multi-factor authentication to prove that they are really who they say they are.
+Most users have a normal behavior that can be tracked; when they fall outside of this norm, it could be risky to allow them to just sign in. You may want to block that user, or ask them to perform multifactor authentication to prove that they're really who they say they are.
A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection sign-in risk detections](../identity-protection/concept-identity-protection-risks.md#sign-in-risk). There are two locations where this policy may be configured, Conditional Access and Identity Protection. Configuration using a Conditional Access policy is the preferred method providing more context including enhanced diagnostic data, report-only mode integration, Graph API support, and the ability to utilize other Conditional Access attributes in the policy.
-The Sign-in risk-based policy protects users from registering MFA in risky sessions. For example. If the users are not registered for MFA, their risky sign-ins will get blocked and presented with the AADSTS53004 error.
+The sign-in risk-based policy protects users from registering for MFA in risky sessions. If users aren't registered for MFA, their risky sign-ins are blocked, and they see an AADSTS53004 error.
## Template deployment
Organizations can choose to deploy this policy using the steps outlined below or
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
- 1. Select **Done**.
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. 1. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**. 1. Select **High** and **Medium**. 1. Select **Done**. 1. Under **Access controls** > **Grant**.
- 1. Select **Grant access**, **Require multi-factor authentication**.
+ 1. Select **Grant access**, **Require multifactor authentication**.
1. Select **Select**. 1. Confirm your settings and set **Enable policy** to **Report-only**. 1. Select **Create** to create and enable your policy.
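The sign-in risk condition configured above maps to a Graph `signInRiskLevels` fragment. A hedged sketch of the equivalent payload:

```python
# Hedged sketch: require MFA at medium or high sign-in risk.
policy = {
    "displayName": "Require MFA for risky sign-ins",
    "state": "enabledForReportingButNotEnforced",  # Report-only
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["high", "medium"],  # High and Medium
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
```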
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Previously updated : 07/06/2022 Last updated : 08/22/2022
On Azure AD registered Windows devices, sign in to the device is considered a pr
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md). 1. Select **Done**.
After administrators confirm your settings using [report-only mode](howto-condit
### Validation
-Use the What-If tool to simulate a sign in from the user to the target application and other conditions based on how you configured your policy. The authentication session management controls show up in the result of the tool.
+Use the What-If tool to simulate a sign-in from the user to the target application and other conditions based on how you configured your policy. The authentication session management controls show up in the result of the tool.
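The authentication session management controls this article configures live under `sessionControls` in the Graph policy resource. A hedged sketch, with an illustrative one-hour sign-in frequency (the values here are assumptions, not taken from the article):

```python
# Hedged sketch: a signInFrequency session control (illustrative values).
policy = {
    "displayName": "Sign-in frequency every 1 hour",
    "state": "enabledForReportingButNotEnforced",  # Report-only
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
    },
    "sessionControls": {
        # Re-prompt for authentication after the configured period.
        "signInFrequency": {"isEnabled": True, "type": "hours", "value": 1},
    },
}
```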
![Conditional Access What If tool results](media/howto-conditional-access-session-lifetime/conditional-access-what-if-tool-result.png)
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
Previously updated : 11/08/2021 Last updated : 08/22/2022
Organizations can choose to deploy this policy using the steps outlined below or
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
- 1. Select **Done**.
1. Under **Cloud apps or actions**, select **All cloud apps**. 1. Under **Conditions** > **Device platforms**, set **Configure** to **Yes**. 1. Under **Include**, **Select device platforms**.
This policy will block all Exchange ActiveSync clients using basic authenticatio
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy. 1. Select **Done**.
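As a rough illustration of the Exchange ActiveSync policy assembled in the steps above, here is the shape of an equivalent Microsoft Graph `conditionalAccessPolicy` body. The Exchange Online application ID and the excluded account ID are assumptions for illustration; verify them against your own tenant before use.

```python
# Sketch: an Exchange ActiveSync policy as a Graph conditionalAccessPolicy
# body. The app ID below is assumed to be Office 365 Exchange Online;
# the excluded account is a placeholder.
exchange_online_app = "00000002-0000-0ff1-ce00-000000000000"

policy = {
    "displayName": "Require approved client app for EAS",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            # Exclude at least one account so you can't lock yourself out.
            "excludeUsers": ["<break-glass-object-id>"],
        },
        "applications": {"includeApplications": [exchange_online_app]},
        # Scope the policy to Exchange ActiveSync clients only.
        "clientAppTypes": ["exchangeActiveSync"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["approvedApplication"]},
}
```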
active-directory Policy Migration Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/policy-migration-mfa.md
Title: Migrate Conditional Access policies with multi-factor authentication - Azure Active Directory
-description: This article shows how to migrate a classic policy that requires multi-factor authentication in the Azure portal.
+ Title: Migrate a classic Conditional Access policy - Azure Active Directory
+description: This article shows how to migrate a classic Conditional Access policy in the Azure portal.
Previously updated : 05/26/2020 Last updated : 08/22/2022
# Migrate a classic policy in the Azure portal
-This article shows how to migrate a classic policy that requires **multi-factor authentication** for a cloud app. Although it is not a prerequisite, we recommend that you read [Migrate classic policies in the Azure portal](policy-migration.md) before you start migrating your classic policies.
+This article shows how to migrate a classic policy that requires **multifactor authentication** for a cloud app. Although it isn't a prerequisite, we recommend that you read [Migrate classic policies in the Azure portal](policy-migration.md) before you start migrating your classic policies.
![Classic policy details requiring MFA for Salesforce app](./media/policy-migration/33.png)
The migration process consists of the following steps:
1. In the list of classic policies, select the policy you wish to migrate. Document the configuration settings so that you can re-create with a new Conditional Access policy.
-## Create a new Conditional Access policy
-
-1. In the [Azure portal](https://portal.azure.com), navigate to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. To create a new Conditional Access policy, select **New policy**.
-1. On the **New** page, in the **Name** textbox, type a name for your policy.
-1. In the **Assignments** section, click **Users and groups**.
- 1. If you have all users selected in your classic policy, click **All users**.
- 1. If you have groups selected in your classic policy, click **Select users and groups**, and then select the required users and groups.
- 1. If you have the excluded groups, click the **Exclude** tab, and then select the required users and groups.
- 1. Select **Done**
-1. In the **Assignment** section, click **Cloud apps or actions**.
-1. On the **Cloud apps or actions** page, perform the following steps:
- 1. Click **Select apps**.
- 1. Click **Select**.
- 1. On the **Select** page, select your cloud app, and then click **Select**.
- 1. On the **Cloud apps** page, click **Done**.
-1. If you have **Require multi-factor authentication** selected:
- 1. In the **Access controls** section, click **Grant**.
- 1. On the **Grant** page, click **Grant access**, and then click **Require multi-factor authentication**.
- 1. Click **Select**.
-1. Click **On** to enable your policy then select **Save**.
-
- ![Conditional Access policy creation](./media/policy-migration-mfa/conditional-access-policy-migration.png)
+For examples of common policies and their configuration in the Azure portal, see the article [Common Conditional Access policies](concept-conditional-access-policy-common.md).
## Disable the classic policy
-To disable your classic policy, click **Disable** in the **Details** view.
+To disable your classic policy, select **Disable** in the **Details** view.
![Disable classic policies](./media/policy-migration-mfa/14.png)
active-directory Policy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/policy-migration.md
Previously updated : 12/04/2019 Last updated : 08/22/2022
Conditional Access is the tool used by Azure Active Directory to bring signals together, make decisions, and enforce organizational policies. Conditional Access is at the heart of the new identity-driven control plane. While the purpose is still the same, the release of the new Azure portal has introduced significant improvements to how Conditional Access works.
-Consider migrating the policies you have not created in the Azure portal because:
+Consider migrating the policies you haven't created in the Azure portal because:
-- You can now address scenarios you could not handle before.
+- You can now address scenarios you couldn't handle before.
- You can reduce the number of policies you have to manage by consolidating them. - You can manage all your Conditional Access policies in one central location. - The Azure classic portal will be retired.
This article explains what you need to know to migrate your existing Conditional
## Classic policies
-In the [Azure portal](https://portal.azure.com), Conditional Access policies can be found under **Azure Active Directory** > **Security** > **Conditional Access**. Your organization might also have older Conditional Access policies not created using this page. These policies are known as *classic policies*. Classic policies are Conditional Access policies, you have created in:
+In the [Azure portal](https://portal.azure.com), Conditional Access policies can be found under **Azure Active Directory** > **Security** > **Conditional Access**. Your organization might also have older Conditional Access policies not created using this page. These policies are known as *classic policies*. Classic policies are Conditional Access policies you've created in:
- The Azure classic portal - The Intune classic portal
This is, for example, the case if you want to support all client app types. In a
![Conditional Access selecting client apps](./media/policy-migration/64.png)
-A consolidation into one new policy is also not possible if your classic policies contain several conditions. A new policy that has **Exchange Active Sync** as client apps condition configured does not support other conditions:
+A consolidation into one new policy is also not possible if your classic policies contain several conditions. A new policy that has **Exchange Active Sync** configured as the client apps condition doesn't support other conditions:
![Exchange ActiveSync does not support the selected conditions](./media/policy-migration/08.png)
-If you have a new policy that has **Exchange Active Sync** as client apps condition configured, you need to make sure that all other conditions are not configured.
+If you have a new policy that has **Exchange Active Sync** configured as the client apps condition, you need to make sure that no other conditions are configured.
![Conditional Access conditions](./media/policy-migration/16.png)
App-based classic policies for Exchange Online that include **Exchange Active Sy
You can consolidate multiple classic policies that include **Exchange Active Sync** as client apps condition if they have: - Only **Exchange Active Sync** as condition -- Several requirements for granting access configured
+- Several requirements for granting access are configured
One common scenario is the consolidation of:
In a new policy, you need to select the [device platforms](concept-conditional-a
- [Use report-only mode for Conditional Access to determine the impact of new policy decisions.](concept-conditional-access-report-only.md) - If you want to know how to configure a Conditional Access policy, see [Conditional Access common policies](concept-conditional-access-policy-common.md).-- If you are ready to configure Conditional Access policies for your environment, see the article [How To: Plan your Conditional Access deployment in Azure Active Directory](plan-conditional-access.md).
+- If you're ready to configure Conditional Access policies for your environment, see the article [How To: Plan your Conditional Access deployment in Azure Active Directory](plan-conditional-access.md).
active-directory Troubleshoot Policy Changes Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-policy-changes-audit-log.md
Title: Troubleshooting Conditional Access policy changes - Azure Active Directory
+ Title: Troubleshoot Conditional Access policy changes - Azure Active Directory
description: Diagnose changes to Conditional Access policy with the Azure AD audit logs. Previously updated : 08/09/2021 Last updated : 08/22/2022
Audit log data is only kept for 30 days by default, which may not be long enough
- Send data to a Log Analytics workspace - Archive data to a storage account-- Stream data to an Event Hub
+- Stream data to Event Hubs
- Send data to a partner solution Find these options in the **Azure portal** > **Azure Active Directory**, **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
To create a policy that blocks access for external users to a set of application
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_FinanceApps.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All guests and external users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md). 1. Select **Done**.
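The steps above stress excluding break-glass accounts before blocking external users. A small sketch of that safeguard, using Graph-style field names from the `conditionalAccessPolicy` schema (the app IDs and account IDs are placeholders, not values from this article):

```python
# Sketch: refuse to build a guest-blocking policy unless at least one
# emergency access (break-glass) account is excluded, as the article warns.
def external_block_policy(name, app_ids, break_glass_ids):
    if not break_glass_ids:
        raise ValueError("exclude at least one emergency access account")
    return {
        "displayName": name,
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {
                "includeUsers": ["GuestsOrExternalUsers"],
                "excludeUsers": list(break_glass_ids),
            },
            "applications": {"includeApplications": list(app_ids)},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }

policy = external_block_policy(
    "ExternalAccess_Block_FinanceApps",
    app_ids=["<finance-app-id>"],
    break_glass_ids=["<break-glass-object-id>"],
)
```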
There may be times you want to block external users except a specific group. For
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_AllButFinance.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All guests and external users**. 1. Under **Exclude**, select **Users and groups**, 1. Choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md).
active-directory Concept Fundamentals Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-block-legacy-authentication.md
- Title: Blocking legacy authentication protocols in Azure AD
-description: Learn how and why organizations should block legacy authentication protocols
----- Previously updated : 01/26/2021---------
-# Blocking legacy authentication
-
-To give your users easy access to your cloud apps, Azure Active Directory (Azure AD) supports a broad variety of authentication protocols including legacy authentication. Legacy authentication is a term that refers to an authentication request made by:
--- Older Office clients that do not use modern authentication (for example, Office 2010 client)-- Any client that uses legacy mail protocols such as IMAP/SMTP/POP3-
-Today, the majority of all compromising sign-in attempts come from legacy authentication. Legacy authentication does not support multi-factor authentication (MFA). Even if you have an MFA policy enabled on your directory, a bad actor can authenticate using a legacy protocol and bypass MFA. The best way to protect your account from malicious authentication requests made by legacy protocols is to block these attempts altogether.
-
-## Identify legacy authentication use
-
-Before you can block legacy authentication in your directory, you need to first understand if your users have apps that use legacy authentication and how it affects your overall directory. Azure AD sign-in logs can be used to understand if you're using legacy authentication.
-
-1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**.
-1. Add the **Client App** column if it is not shown by clicking on **Columns** > **Client App**.
-1. Filter by **Client App** > check all the **Legacy Authentication Clients** options presented.
-1. Filter by **Status** > **Success**.
-1. Expand your date range if necessary using the **Date** filter.
-1. If you have activated the [new sign-in activity reports preview](../reports-monitoring/concept-all-sign-ins.md), repeat the above steps also on the **User sign-ins (non-interactive)** tab.
-
-Filtering will only show you successful sign-in attempts that were made by the selected legacy authentication protocols. Clicking on each individual sign-in attempt will show you additional details. The Client App column or the Client App field under the Basic Info tab after selecting an individual row of data will indicate which legacy authentication protocol was used.
-These logs will indicate which users are still depending on legacy authentication and which applications are using legacy protocols to make authentication requests. For users that do not appear in these logs and are confirmed to not be using legacy authentication, implement a Conditional Access policy or enable the Baseline policy: block legacy authentication for these users only.
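The portal filtering described above (legacy client apps, successful sign-ins) can also be done over exported sign-in log records. This is a sketch only: the field names mirror the Graph `signIn` resource (`clientAppUsed`, `status`), the set of legacy client app labels is illustrative, and the sample records are made up.

```python
# Sketch: find users with successful legacy-authentication sign-ins,
# mirroring the Client App / Status filters applied in the portal.
LEGACY_CLIENT_APPS = {
    "Exchange ActiveSync", "IMAP4", "POP3", "SMTP",
    "Authenticated SMTP", "Other clients",
}

def legacy_auth_users(sign_ins):
    """Return users with at least one successful legacy-auth sign-in."""
    return sorted({
        s["userPrincipalName"]
        for s in sign_ins
        if s["clientAppUsed"] in LEGACY_CLIENT_APPS
        and s["status"]["errorCode"] == 0  # 0 indicates success
    })

sample = [
    {"userPrincipalName": "alice@contoso.com", "clientAppUsed": "IMAP4",
     "status": {"errorCode": 0}},
    {"userPrincipalName": "bob@contoso.com", "clientAppUsed": "Browser",
     "status": {"errorCode": 0}},
]
print(legacy_auth_users(sample))  # ['alice@contoso.com']
```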
-
-## Moving away from legacy authentication
-
-Once you have a better idea of who is using legacy authentication in your directory and which applications depend on it, the next step is upgrading your users to use modern authentication. Modern authentication is a method of identity management that offers more secure user authentication and authorization. If you have an MFA policy in place on your directory, modern authentication ensures that the user is prompted for MFA when required. It is the more secure alternative to legacy authentication protocols.
-
-This section gives a step-by-step overview on how to update your environment to modern authentication. Read through the steps below before enabling a legacy authentication blocking policy in your organization.
-
-### Step 1: Enable modern authentication in your directory
-
-The first step in enabling modern authentication is making sure your directory supports modern authentication. Modern authentication is enabled by default for directories created on or after August 1, 2017. If your directory was created prior to this date, you'll need to manually enable modern authentication for your directory using the following steps:
-
-1. Check to see if your directory already supports modern authentication by running `Get-CsOAuthConfiguration` from the [Skype for Business Online PowerShell module](/office365/enterprise/powershell/manage-skype-for-business-online-with-office-365-powershell).
-1. If your command returns an empty `OAuthServers` property, then Modern Authentication is disabled. Update the setting to enable modern authentication using `Set-CsOAuthConfiguration`. If your `OAuthServers` property contains an entry, you're good to go.
-
-Be sure to complete this step before moving forward. It's critical that your directory configurations are changed first because they dictate which protocol will be used by all Office clients. Even if you're using Office clients that support modern authentication, they will default to using legacy protocols if modern authentication is disabled on your directory.
-
-### Step 2: Office applications
-
-Once you have enabled modern authentication in your directory, you can start updating applications by enabling modern authentication for Office clients. Office 2016 or later clients support modern authentication by default. No extra steps are required.
-
-If you are using Office 2013 Windows clients or older, we recommend upgrading to Office 2016 or later. Even after completing the prior step of enabling modern authentication in your directory, the older Office applications will continue to use legacy authentication protocols. If you are using Office 2013 clients and are unable to immediately upgrade to Office 2016 or later, follow the steps in the following article to [Enable Modern Authentication for Office 2013 on Windows devices](/office365/admin/security-and-compliance/enable-modern-authentication). To help protect your account while you're using legacy authentication, we recommend using strong passwords across your directory. Check out [Azure AD password protection](../authentication/concept-password-ban-bad.md) to ban weak passwords across your directory.
-
-Office 2010 does not support modern authentication. You will need to upgrade any users with Office 2010 to a more recent version of Office. We recommend upgrading to Office 2016 or later, as it blocks legacy authentication by default.
-
-If you are using macOS, we recommend upgrading to Office for Mac 2016 or later. If you are using the native mail client, you will need to have macOS version 10.14 or later on all devices.
-
-### Step 3: Exchange and SharePoint
-
-For Windows-based Outlook clients to use modern authentication, Exchange Online must be modern authentication enabled as well. If modern authentication is disabled for Exchange Online, Windows-based Outlook clients that support modern authentication (Outlook 2013 or later) will use basic authentication to connect to Exchange Online mailboxes.
-
-SharePoint Online is enabled for modern authentication by default. For directories created after August 1, 2017, modern authentication is enabled by default in Exchange Online. However, if you had previously disabled modern authentication or are using a directory created prior to this date, follow the steps in the following article to [Enable modern authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/enable-or-disable-modern-authentication-in-exchange-online).
-
-### Step 4: Skype for Business
-
-To prevent legacy authentication requests made by Skype for Business, it is necessary to enable modern authentication for Skype for Business Online. For directories created after August 1, 2017, modern authentication for Skype for Business is enabled by default.
-
-We suggest you transition to Microsoft Teams, which supports modern authentication by default. However, if you are unable to migrate at this time, you will need to enable modern authentication for Skype for Business Online so that Skype for Business clients start using modern authentication. Follow the steps in this article [Skype for Business topologies supported with Modern Authentication](/skypeforbusiness/plan-your-deployment/modern-authentication/topologies-supported), to enable Modern Authentication for Skype for Business.
-
-In addition to enabling modern authentication for Skype for Business Online, we recommend enabling modern authentication for Exchange Online when enabling modern authentication for Skype for Business. This process will help synchronize the state of modern authentication in Exchange Online and Skype for Business online and will prevent multiple sign-in prompts for Skype for Business clients.
-
-### Step 5: Using mobile devices
-
-Applications on your mobile device need to block legacy authentication as well. We recommend using Outlook for Mobile. Outlook for Mobile supports modern authentication by default and will satisfy other MFA baseline protection policies.
-
-In order to use the native iOS mail client, you will need to be running iOS version 11.0 or later to ensure the mail client has been updated to block legacy authentication.
-
-### Step 6: On-premises clients
-
-If you are a hybrid customer using Exchange Server on-premises and Skype for Business on-premises, both services will need to be updated to enable modern authentication. When using modern authentication in a hybrid environment, you're still authenticating users on-premises. The story of authorizing their access to resources (files or emails) changes.
-
-Before you can begin enabling modern authentication on-premises, please be sure that you have met the pre-requisites. You're now ready to enable modern authentication on-premises.
-
-Steps for enabling modern authentication can be found in the following articles:
-
-* [How to configure Exchange Server on-premises to use Hybrid Modern Authentication](/office365/enterprise/configure-exchange-server-for-hybrid-modern-authentication)
-* [How to use Modern Authentication with Skype for Business](/skypeforbusiness/manage/authentication/use-adal)
-
-## Next steps
--- [How to configure Exchange Server on-premises to use Hybrid Modern Authentication](/office365/enterprise/configure-exchange-server-for-hybrid-modern-authentication)-- [How to use Modern Authentication with Skype for Business](/skypeforbusiness/manage/authentication/use-adal)-- [Block legacy authentication](../conditional-access/block-legacy-authentication.md)
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Today, most compromising sign-in attempts come from legacy authentication. Legac
After security defaults are enabled in your tenant, all authentication requests made by an older protocol will be blocked. Security defaults block Exchange Active Sync basic authentication. > [!WARNING]
-> Before you enable security defaults, make sure your administrators aren't using older authentication protocols. For more information, see [How to move away from legacy authentication](concept-fundamentals-block-legacy-authentication.md).
+> Before you enable security defaults, make sure your administrators aren't using older authentication protocols. For more information, see [How to move away from legacy authentication](../conditional-access/block-legacy-authentication.md).
- [How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
active-directory Perform Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/perform-access-review.md
# Review access to groups and applications in Azure AD access reviews
-Azure Active Directory (Azure AD) simplifies how enterprises manage access to groups and applications in Azure AD and other Microsoft Online Services with a feature called Azure AD access reviews. This article will go over how a designated reviewer performs an access review for members of a group or users with access to an application. If you would like to review access to an access package read [Review access of an access package in Azure AD entitlement management](entitlement-management-access-reviews-review-access.md)
+Azure Active Directory (Azure AD) simplifies how enterprises manage access to groups and applications in Azure AD and other Microsoft web services with a feature called Azure AD access reviews. This article will cover how a designated reviewer performs an access review for members of a group or users with access to an application. If you want to review access to an access package, read [Review access of an access package in Azure AD entitlement management](entitlement-management-access-reviews-review-access.md).
-## Perform access review using My Access
-You can review access to groups and applications via My Access, an end-user friendly portal for granting, approving, and reviewing access needs.
+## Perform access review by using My Access
+You can review access to groups and applications via My Access. My Access is a user-friendly portal for granting, approving, and reviewing access needs.
-### Use email to navigate to My Access
+### Use email to go to My Access
>[!IMPORTANT]
-> There could be delays in receiving email and it some cases it could take up to 24 hours. Add azure-noreply@microsoft.com to your safe recipients list to make sure that you are receiving all emails.
+> There could be delays in receiving email. In some cases, it could take up to 24 hours. Add azure-noreply@microsoft.com to your safe recipients list to make sure that you're receiving all emails.
-1. Look for an email from Microsoft asking you to review access. You can see an example email message below:
+1. Look for an email from Microsoft asking you to review access. Here's an example email message:
- ![Example email from Microsoft to review access to a group](./media/perform-access-review/access-review-email-preview.png)
+ ![Screenshot of example email from Microsoft to review access to a group.](./media/perform-access-review/access-review-email-preview.png)
-1. Click the **Start review** link to open the access review.git pu
+1. Select the **Start review** link to open the access review.
-### Navigate directly to My Access
+### Go directly to My Access
You can also view your pending access reviews by using your browser to open My Access.
-1. Sign in to the My Access at https://myaccess.microsoft.com/
+1. Sign in to My Access at https://myaccess.microsoft.com/.
-2. Select **Access reviews** from the menu on the left side bar to see a list of pending access reviews assigned to you.
+2. Select **Access reviews** from the left menu to see a list of pending access reviews assigned to you.
## Review access for one or more users
-After you open My Access under Groups and Apps you can see:
+After you open My Access under **Groups and Apps**, you can see:
-- **Name** The name of the access review.-- **Due** The due date for the review. After this date denied users could be removed from the group or app being reviewed.-- **Resource** The name of the resource under review.-- **Progress** The number of users reviewed over the total number of users part of this access review.
+- **Name**: The name of the access review.
+- **Due**: The due date for the review. After this date, denied users could be removed from the group or app being reviewed.
+- **Resource**: The name of the resource under review.
+- **Progress**: The number of users reviewed over the total number of users part of this access review.
-Click on the name of an access review to get started.
+Select the name of an access review to get started.
-![Pending access reviews list for apps and groups](./media/perform-access-review/access-reviews-list-preview.png)
+![Screenshot of pending access reviews list for apps and groups.](./media/perform-access-review/access-reviews-list-preview.png)
-Once that it opens, you will see the list of users in scope for the access review.
+After it opens, you'll see the list of users in scope for the access review.
-> [!NOTE]
+> [!NOTE]
> If the request is to review your own access, the page will look different. For more information, see [Review access for yourself to groups or applications](review-your-access.md). There are two ways that you can approve or deny access:
There are two ways that you can approve or deny access:
1. Review the list of users and decide whether to approve or deny their continued access.
-1. Select one or more users by clicking the circle next to their names.
+1. Select one or more users by selecting the circle next to their names.
+
+1. Select **Approve** or **Deny** on the bar.
+
+ If you're unsure if a user should continue to have access, you can select **Don't know**. The user gets to keep their access, and your choice is recorded in the audit logs. Keep in mind that any information you provide will be available to other reviewers. They can read your comments and take them into account when they review the request.
-1. Select **Approve** or **Deny** on the bar above.
- - If you are unsure if a user should continue to have access or not, you can click **Don't know**. The user gets to keep their access and your choice is recorded in the audit logs. It is important that you keep in mind that any information you provide will be available to other reviewers. They can read your comments and take them into account when they review the request.
+ ![Screenshot of open access review listing the users who need review.](./media/perform-access-review/user-list-preview.png)
- ![Open access review listing the users who need review](./media/perform-access-review/user-list-preview.png)
+1. The administrator of the access review might require you to supply a reason for your decision in the **Reason** box. Even when a reason isn't required, you can still provide one. The information that you include will be available to other approvers for review.
-1. The administrator of the access review may require that you supply a reason in the **Reason** box for your decision. Even when a reason is not required. You can still provide a reason for your decision and the information that you include will be available to other approvers for review.
+1. Select **Submit**.
-1. Click **Submit**.
- - You can change your response at any time until the access review has ended. If you want to change your response, select the row and update the response. For example, you can approve a previously denied user or deny a previously approved user.
+ You can change your response at any time until the access review has ended. If you want to change your response, select the row and update the response. For example, you can approve a previously denied user or deny a previously approved user.
> [!IMPORTANT]
- > - If a user is denied access, they aren't removed immediately. They are removed when the review period has ended or when an administrator stops the review.
- > - If there are multiple reviewers, the last submitted response is recorded. Consider an example where an administrator designates two reviewers ΓÇô Alice and Bob. Alice opens the access review first and approves a user's access request. Before the review period ends, Bob opens the access review and denies access on the same request previously approved by Alice. The last decision denying the access is the response that gets recorded.
+ > - If a user is denied access, they aren't removed immediately. The user is removed when the review period has ended or when an administrator stops the review.
+ > - If there are multiple reviewers, the last submitted response is recorded. Consider an example where an administrator designates two reviewers: Alice and Bob. Alice opens the access review first and approves a user's access request. Before the review period ends, Bob opens the access review and denies access on the same request previously approved by Alice. The last decision denying the access is the response that gets recorded.
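The multiple-reviewer rule in the note above (the last submitted response is recorded) can be sketched as a tiny function; the reviewer names come from the article's own Alice/Bob example.

```python
# Sketch of the recording rule: when several reviewers act on the same
# request, only the last submitted decision is recorded.
def recorded_decision(responses):
    """responses: list of (reviewer, decision) pairs in submission order."""
    if not responses:
        return None
    _reviewer, decision = responses[-1]
    return decision

history = [("Alice", "Approve"), ("Bob", "Deny")]
print(recorded_decision(history))  # 'Deny' -- Bob submitted last
```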
### Review access based on recommendations
-To make access reviews easier and faster for you, we also provide recommendations that you can accept with a single click. There are two ways recommendations are generated for the reviewer. One method the system uses to create recommendations is by the user's sign-in activity. If a user has been inactive for 30 days or more, the reviewer will be recommended to deny access. The other method is based on the access the user's peers have. If the user doesn't have the same access as their peers, the reviewer will be recommended to deny that user access.
+To make access reviews easier and faster for you, we also provide recommendations that you can accept with a single selection. There are two ways that the system generates recommendations for the reviewer. One method is by the user's sign-in activity. If a user has been inactive for 30 days or more, the system will recommend that the reviewer deny access.
-If you have **No sign-in within 30 days** or **Peer outlier** enabled, follow the steps below to accept recommendations:
+The other method is based on the access that the user's peers have. If the user doesn't have the same access as their peers, the system will recommend that the reviewer deny that user access.
-1. Select one or more users and then Click **Accept recommendations**.
+If you have **No sign-in within 30 days** or **Peer outlier** enabled, follow these steps to accept recommendations:
- ![Open access review listing showing the Accept recommendations button](./media/perform-access-review/accept-recommendations-preview.png)
+1. Select one or more users, and then select **Accept recommendations**.
-1. Or to accept recommendations for all unreviewed users, make sure that no users are selected and click on the **Accept recommendations** button on the top bar.
+ ![Screenshot of open access review listing that shows the Accept recommendations button.](./media/perform-access-review/accept-recommendations-preview.png)
-1. Click **Submit** to accept the recommendations.
+ Or to accept recommendations for all unreviewed users, make sure that no users are selected and then select the **Accept recommendations** button on the top bar.
+1. Select **Submit** to accept the recommendations.
> [!NOTE]
-> When you accept recommendations previous decisions will not be changed.
+> When you accept recommendations, previous decisions won't be changed.
### Review access for one or more users in a multi-stage access review (preview)
-If multi-stage access reviews have been enabled by the administrator, there will be 2 or 3 total stages of review. Each stage of review will have a specified reviewer.
+If the administrator has enabled multi-stage access reviews, there will be two or three total stages of review. Each stage of review will have a specified reviewer.
-You will review access either manually or accept the recommendations based on sign-in activity for the stage you are assigned as the reviewer.
+You will either review access manually or accept the recommendations based on sign-in activity for the stage you're assigned as the reviewer.
-If you are the 2nd stage or 3rd stage reviewer, you will also see the decisions made by the reviewers in the prior stage(s) if the administrator enabled this setting when creating the access review. The decision made by a 2nd or 3rd stage reviewer will overwrite the previous stage. So, the decision the 2nd stage reviewer makes will overwrite the first stage, and the 3rd stage reviewer's decision will overwrite the second stage.
+If you're the second-stage or third-stage reviewer, you'll also see the decisions made by the reviewers in the prior stages, if the administrator enabled this setting when creating the access review. The decision made by a second-stage or third-stage reviewer will overwrite the previous stage. So, the decision that the second-stage reviewer makes will overwrite the first stage. And the third-stage reviewer's decision will overwrite the second stage.
- ![Select user to show the multi-stage access review results](./media/perform-access-review/multi-stage-access-review.png)
+ ![Screenshot showing selection of a user to show the multi-stage access review results.](./media/perform-access-review/multi-stage-access-review.png)
Approve or deny access as outlined in [Review access for one or more users](#review-access-for-one-or-more-users). > [!NOTE]
-> The next stage of the review won't become active until the duration specified during the access review setup has passed. If the administrator believes a stage is done but the review duration for this stage has not expired yet, they can use the **Stop current stage** button in the overview of the access review in the Azure AD portal. This will close the active stage and start the next stage.
+> The next stage of the review won't become active until the duration specified during the access review setup has passed. If the administrator believes a stage is done but the review duration for this stage has not expired yet, they can use the **Stop current stage** button in the overview of the access review in the Azure AD portal. This action will close the active stage and start the next stage.
-### Review access for B2B direct connect users in Teams Shared Channels and Microsoft 365 groups (preview)
+### Review access for B2B direct connect users in Teams shared channels and Microsoft 365 groups (preview)
To review access of B2B direct connect users, use the following instructions:
-1. As the reviewer, you should receive an email that requests you to review access for the team or group. Click the link in the email, or navigate directly to https://myaccess.microsoft.com/.
+1. As the reviewer, you should receive an email that requests you to review access for the team or group. Select the link in the email, or go directly to https://myaccess.microsoft.com/.
-1. Follow the instructions in [Review access for one or more users](#review-access-for-one-or-more-users) to make decisions to approve or deny the users access to the Teams.
+1. Follow the instructions in [Review access for one or more users](#review-access-for-one-or-more-users) to make decisions to approve or deny the users access to the teams.
> [!NOTE]
-> Unlike internal users and B2B Collaboration users, B2B direct connect users and Teams **don't** have recommendations based on last sign-in activity to make decisions when you perform the review.
+> Unlike internal users and B2B collaboration users, B2B direct connect users and teams _don't_ have recommendations based on last sign-in activity to make decisions when you perform the review.
-If a Team you review has shared channels, all B2B direct connect users and teams that access those shared channels are part of the review. This includes B2B collaboration users and internal users. When a B2B direct connect user or team is denied access in an access review, the user will lose access to every shared channel in the Team. To learn more about B2B direct connect users, read [B2B direct connect](../external-identities/b2b-direct-connect-overview.md).
+If a team you review has shared channels, all B2B direct connect users and teams that access those shared channels are part of the review. This includes B2B collaboration users and internal users. When a B2B direct connect user or team is denied access in an access review, the user will lose access to every shared channel in the team. To learn more about B2B direct connect users, read [B2B direct connect](../external-identities/b2b-direct-connect-overview.md).
-## If no action is taken on access review
-When the access review is setup, the administrator has the option to use advanced settings to determine what will happen in the event a reviewer doesn't respond to an access review request.
+## Set up what will happen if no action is taken on access review
+When the access review is set up, the administrator has the option to use advanced settings to determine what will happen if a reviewer doesn't respond to an access review request.
-The administrator can set up the review so that if reviewers do not respond at the end of the review period, all unreviewed users can have an automatic decision made on their access. This includes the loss of access to the group or application under review.
+The administrator can set up the review so that if reviewers don't respond at the end of the review period, all unreviewed users can have an automatic decision made on their access. This includes the loss of access to the group or application under review.
## Next steps
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
You can also configure group claims in the [optional claims](../../active-direct
By default, group `ObjectID` attributes will be emitted in the group claim value. To modify the claim value to contain on-premises group attributes, or to change the claim type to a role, use the `optionalClaims` configuration described in the next step.
-3. Set optional clams for group name configuration.
+3. Set optional claims for group name configuration.
If you want the groups in the token to contain the on-premises Active Directory group attributes, specify which token-type optional claim should be applied in the `optionalClaims` section. You can list multiple token types:
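For example, in the application manifest the `optionalClaims` shape looks like the following. This is a minimal sketch: the token types and `additionalProperties` values shown here, such as `sam_account_name` to emit the on-premises account name and `emit_as_roles` to change the claim type to a role, are illustrative, and you should trim or extend them for your scenario:

```json
"optionalClaims": {
    "idToken": [
        {
            "name": "groups",
            "additionalProperties": ["sam_account_name", "emit_as_roles"]
        }
    ],
    "saml2Token": [
        {
            "name": "groups",
            "additionalProperties": ["sam_account_name"]
        }
    ]
}
```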
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Proactively communicate with your users how their experience will change, when i
### Plan the maintenance window
-After the domain conversion, Azure AD might continue to send some legacy authentication requests from Exchange Online to your AD FS servers for up to four hours. The delay is because the Exchange Online cache for [legacy applications authentication](../fundamentals/concept-fundamentals-block-legacy-authentication.md) can take up to 4 hours to be aware of the cutover from federation to cloud authentication.
+After the domain conversion, Azure AD might continue to send some legacy authentication requests from Exchange Online to your AD FS servers for up to four hours. The delay is because the Exchange Online cache for legacy applications authentication can take up to 4 hours to be aware of the cutover from federation to cloud authentication.
During this four-hour window, you may prompt users for credentials repeatedly when reauthenticating to applications that use legacy authentication. Although the user can still successfully authenticate against AD FS, Azure AD no longer accepts the user's issued token because that federation trust is now removed.
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Before organizations enable remediation policies, they may want to [investigate]
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**.
Before organizations enable remediation policies, they may want to [investigate]
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users and groups**.
+1. Under **Assignments**, select **Users or workload identities**.
1. Under **Include**, select **All users**. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**.
active-directory Tutorial Manage Access Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-access-security.md
It's easier for an administrator to manage access to the application by assignin
1. In the left menu of the tenant overview, select **Security**. 1. Select **Conditional Access**, select **+ New policy**, and then select **Create new policy**. 1. Enter a name for the policy, such as *MFA Pilot*.
-1. Under **Assignments**, select **Users and groups**
+1. Under **Assignments**, select **Users or workload identities**.
1. On the **Include** tab, choose **Select users and groups**, and then select **Users and groups**. 1. Browse for and select the *MFA-Test-Group* that you previously created, and then choose **Select**. 1. Don't select **Create** yet, you add MFA to the policy in the next section.
active-directory Tutorial Azure Monitor Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md
To use this feature, you need:
1. Select **Azure Active Directory** > **Audit logs**. 1. Select **Export Data Settings**.
-
+ 1. In the **Diagnostics settings** pane, do either of the following: * To change existing settings, select **Edit setting**. * To add new settings, select **Add diagnostics setting**.
To use this feature, you need:
1. Select the **Stream to an event hub** check box, and then select **Event Hub/Configure**.
- [ ![Export settings](./media/tutorial-azure-monitor-stream-logs-to-event-hub/diagnostic-setting-stream-to-event-hub.png) ](./media/tutorial-azure-monitor-stream-logs-to-event-hub/diagnostic-setting-stream-to-event-hub.png)
-
- 1. Select the Azure subscription and Event Hubs namespace that you want to route the logs to.
+ [ ![Export settings](./media/tutorial-azure-monitor-stream-logs-to-event-hub/diagnostic-setting-stream-to-event-hub.png) ](./media/tutorial-azure-monitor-stream-logs-to-event-hub/diagnostic-setting-stream-to-event-hub.png#lightbox)
+
+ 1. Select the Azure subscription and Event Hubs namespace that you want to route the logs to.
The subscription and Event Hubs namespace must both be associated with the Azure AD tenant that the logs stream from. You can also specify an event hub within the Event Hubs namespace to which logs should be sent. If no event hub is specified, an event hub is created in the namespace with the default name **insights-logs-audit**. 1. Select any combination of the following items:
To use this feature, you need:
1. After about 15 minutes, verify that events are displayed in your event hub. To do so, go to the event hub from the portal and verify that the **incoming messages** count is greater than zero.
- [ ![Audit logs](./media/tutorial-azure-monitor-stream-logs-to-event-hub/azure-monitor-event-hub-instance.png)](./media/tutorial-azure-monitor-stream-logs-to-event-hub/azure-monitor-event-hub-instance.png)
+ [ ![Audit logs](./media/tutorial-azure-monitor-stream-logs-to-event-hub/azure-monitor-event-hub-instance.png)](./media/tutorial-azure-monitor-stream-logs-to-event-hub/azure-monitor-event-hub-instance.png#lightbox)
## Access data from your event hub
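Each message read from the event hub carries a JSON envelope in the Azure Monitor export format, whose `records` array holds the individual log entries. The following is a minimal sketch of decoding one event body; the sample payload is illustrative only, not a real audit record:

```python
import json

def extract_records(event_body: bytes) -> list:
    """Decode one Event Hubs message body: Azure Monitor exports logs as a
    JSON envelope whose 'records' array holds the individual log entries."""
    envelope = json.loads(event_body)
    return envelope.get("records", [])

# Illustrative envelope only; real audit records carry many more fields.
sample = json.dumps(
    {"records": [{"operationName": "Add user", "category": "AuditLogs"}]}
).encode("utf-8")

for record in extract_records(sample):
    print(record["operationName"])  # prints "Add user"
```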
active-directory 4Dx Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/4dx-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with 4DX'
+description: Learn how to configure single sign-on between Azure Active Directory and 4DX.
++++++++ Last updated : 08/09/2022++++
+# Tutorial: Azure AD SSO integration with 4DX
+
+In this tutorial, you'll learn how to integrate 4DX with Azure Active Directory (Azure AD). When you integrate 4DX with Azure AD, you can:
+
+* Control in Azure AD who has access to 4DX.
+* Enable your users to be automatically signed-in to 4DX with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* 4DX single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* 4DX supports **IDP** initiated SSO.
+
+## Add 4DX from the gallery
+
+To configure the integration of 4DX into Azure AD, you need to add 4DX from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **4DX** in the search box.
+1. Select **4DX** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for 4DX
+
+Configure and test Azure AD SSO with 4DX using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in 4DX.
+
+To configure and test Azure AD SSO with 4DX, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure 4DX SSO](#configure-4dx-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create 4DX test user](#create-4dx-test-user)** - to have a counterpart of B.Simon in 4DX that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **4DX** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. Save the configuration by clicking the **Save** button.
+
+1. 4DX application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the 4DX application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | companykey | `<unique ID>` |
+
+ > [!Note]
+    > For the `<unique ID>` value of the customer assertion, please reach out to [4DX support team](mailto:support@bahrcode.com).
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up 4DX** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy a configuration appropriate URL.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to 4DX.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **4DX**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure 4DX SSO
+
+To configure single sign-on on the **4DX** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [4DX support team](mailto:support@bahrcode.com). They will configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create 4DX test user
+
+In this section, you create a user called Britta Simon in 4DX. Work with [4DX support team](mailto:support@bahrcode.com) to add the users in the 4DX platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the 4DX application for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the 4DX tile in My Apps, you should be automatically signed in to the 4DX application for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure 4DX you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Adra By Trintech Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adra-by-trintech-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Adra by Trintech'
+description: Learn how to configure single sign-on between Azure Active Directory and Adra by Trintech.
++++++++ Last updated : 08/22/2022++++
+# Tutorial: Azure AD SSO integration with Adra by Trintech
+
+In this tutorial, you'll learn how to integrate Adra by Trintech with Azure Active Directory (Azure AD). When you integrate Adra by Trintech with Azure AD, you can:
+
+* Control in Azure AD who has access to Adra by Trintech.
+* Enable your users to be automatically signed-in to Adra by Trintech with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Adra by Trintech single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Adra by Trintech supports **SP** and **IDP** initiated SSO.
+
+## Add Adra by Trintech from the gallery
+
+To configure the integration of Adra by Trintech into Azure AD, you need to add Adra by Trintech from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Adra by Trintech** in the search box.
+1. Select **Adra by Trintech** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Adra by Trintech
+
+Configure and test Azure AD SSO with Adra by Trintech using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Adra by Trintech.
+
+To configure and test Azure AD SSO with Adra by Trintech, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Adra by Trintech SSO](#configure-adra-by-trintech-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Adra by Trintech test user](#create-adra-by-trintech-test-user)** - to have a counterpart of B.Simon in Adra by Trintech that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Adra by Trintech** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, if you have a **Service Provider metadata file** and wish to configure in **IDP** initiated mode, perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Screenshot shows to upload metadata file.](common/upload-metadata.png "File")
+
+ b. Click on **folder logo** to select the metadata file and click **Upload**.
+
+ ![Screenshot shows to choose metadata file.](common/browse-upload-metadata.png "Folder")
+
+ c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in **Basic SAML Configuration** section.
+
+ d. In the **Sign-on URL** text box, type the URL:
+ `https://login.adra.com`
+
+ e. In the **Relay state** text box, type the URL:
+ `https://setup.adra.com`
+
+ f. In the **Logout URL** text box, type the URL:
+ `https://login.adra.com/Saml/SLOServiceSP`
+
+ > [!Note]
+ > You will get the **Service Provider metadata file** from the **Configure Adra by Trintech SSO** section, which is explained later in the tutorial. If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Adra by Trintech.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Adra by Trintech**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Adra by Trintech SSO
+
+1. Log in to your Adra by Trintech company site as an administrator.
+
+1. Go to the **Engagement** > **Security** tab > **Security Policy** and select the **Use a federated identity provider** button.
+
+1. Download the **Service Provider metadata file** by clicking **here** on the Adra page, and upload this metadata file in the Azure portal.
+
+ [ ![Screenshot that shows the Configuration Settings.](./media/adra-by-trintech-tutorial/settings.png "Configuration") ](./media/adra-by-trintech-tutorial/settings.png#lightbox)
+
+1. Click on the **Add a new federated identity provider** button and perform the following steps:
+
+ [ ![Screenshot that shows the Organization Algorithm.](./media/adra-by-trintech-tutorial/certificate.png "Organization") ](./media/adra-by-trintech-tutorial/certificate.png#lightbox)
+
+    a. Enter valid **Name** and **Description** values in the textboxes.
+
+    b. In the **Metadata URL** textbox, paste the **App Federation Metadata Url** value that you copied from the Azure portal, and then click the **Test URL** button.
+
+    c. Click **Save** to save the SAML configuration.
+
+### Create Adra by Trintech test user
+
+In this section, you create a user called Britta Simon at Adra by Trintech. Work with [Adra by Trintech support team](mailto:support@adra.com) to add the users in the Adra by Trintech platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Adra by Trintech Sign-on URL where you can initiate the login flow.
+
+* Go to Adra by Trintech Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Adra by Trintech for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Adra by Trintech tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Adra by Trintech for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Adra by Trintech, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Lattice Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lattice-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Lattice'
+description: Learn how to configure single sign-on between Azure Active Directory and Lattice.
+Last updated: 08/22/2022
+# Tutorial: Azure AD SSO integration with Lattice
+
+In this tutorial, you'll learn how to integrate Lattice with Azure Active Directory (Azure AD). When you integrate Lattice with Azure AD, you can:
+
+* Control in Azure AD who has access to Lattice.
+* Enable your users to be automatically signed-in to Lattice with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Lattice single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Lattice supports **SP** and **IDP** initiated SSO.
+
+## Add Lattice from the gallery
+
+To configure the integration of Lattice into Azure AD, you need to add Lattice from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Lattice** in the search box.
+1. Select **Lattice** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Lattice
+
+Configure and test Azure AD SSO with Lattice using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lattice.
+
+To configure and test Azure AD SSO with Lattice, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Lattice SSO](#configure-lattice-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Lattice test user](#create-lattice-test-user)** - to have a counterpart of B.Simon in Lattice that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Lattice** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://router.latticehq.com/sso/<subdomain>/metadata`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://router.latticehq.com/sso/<subdomain>/acs`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://router.latticehq.com/sso/lattice/sp-login-redirect`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Lattice support team](mailto:customercare@lattice.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
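
Both values vary only by subdomain, so once the Lattice support team gives you the real subdomain you can derive them consistently. A minimal sketch, with `contoso` as a stand-in subdomain:

```python
# Illustrative helper for deriving the Lattice SAML endpoints from a
# tenant subdomain; the subdomain below is a placeholder, not a real tenant.
BASE = "https://router.latticehq.com/sso"

def lattice_saml_urls(subdomain: str) -> dict:
    """Return the Identifier and Reply URL patterns for a Lattice subdomain."""
    return {
        "identifier": f"{BASE}/{subdomain}/metadata",
        "reply_url": f"{BASE}/{subdomain}/acs",
    }

urls = lattice_saml_urls("contoso")
print(urls["identifier"])  # https://router.latticehq.com/sso/contoso/metadata
print(urls["reply_url"])   # https://router.latticehq.com/sso/contoso/acs
```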
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Lattice** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+    1. In the **User name** field, enter the username in the format username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Lattice.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Lattice**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the **Default Access** role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Lattice SSO
+
+1. Log in to your Lattice company site as an administrator.
+
+1. Go to **Admin** > **Platform** > **Settings** > **Single sign-on settings** and perform the following steps:
+
+ ![Screenshot that shows the Configuration Settings.](./media/lattice-tutorial/settings.png "Configuration")
+
+   a. In the **XML Metadata** textbox, paste the contents of the **Federation Metadata XML** file that you downloaded from the Azure portal.
+
+ b. Click **Save**.
+
+### Create Lattice test user
+
+In this section, you create a user called Britta Simon in Lattice. Work with the [Lattice support team](mailto:customercare@lattice.com) to add the users to the Lattice platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect to the Lattice Sign-on URL, where you can initiate the login flow.
+
+* Go to the Lattice Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Lattice instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Lattice tile in My Apps, you're redirected to the application sign-on page to initiate the login flow if the app is configured in SP mode; if it's configured in IDP mode, you should be automatically signed in to the Lattice instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Lattice, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Sketch Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sketch-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Sketch'
+description: Learn how to configure single sign-on between Azure Active Directory and Sketch.
+Last updated: 08/22/2022
+# Tutorial: Azure AD SSO integration with Sketch
+
+In this tutorial, you'll learn how to integrate Sketch with Azure Active Directory (Azure AD). When you integrate Sketch with Azure AD, you can:
+
+* Control in Azure AD who has access to Sketch.
+* Enable your users to be automatically signed-in to Sketch with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Sketch single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Sketch supports **SP** initiated SSO.
+* Sketch supports **Just In Time** user provisioning.
+
+## Add Sketch from the gallery
+
+To configure the integration of Sketch into Azure AD, you need to add Sketch from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Sketch** in the search box.
+1. Select **Sketch** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Sketch
+
+Configure and test Azure AD SSO with Sketch using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Sketch.
+
+To configure and test Azure AD SSO with Sketch, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Sketch SSO](#configure-sketch-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Sketch test user](#create-sketch-test-user)** - to have a counterpart of B.Simon in Sketch that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Sketch** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `sketch-<uuid_v4>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://sso.sketch.com/saml/acs?id=<uuid_v4>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://www.sketch.com`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Sketch support team](mailto:sso-support@sketch.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
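
Since the Identifier embeds a version-4 UUID, a quick shape check can catch copy-paste mistakes before you save the configuration. This is an illustrative helper, not part of the Sketch or Azure AD tooling:

```python
# Hedged sketch: verify that a value follows the sketch-<uuid_v4> shape.
# The UUID itself must come from Sketch; this only checks the format.
import uuid

def is_valid_sketch_identifier(identifier: str) -> bool:
    """Check the 'sketch-<uuid_v4>' pattern used by the Identifier value."""
    prefix = "sketch-"
    if not identifier.startswith(prefix):
        return False
    try:
        parsed = uuid.UUID(identifier[len(prefix):])
    except ValueError:
        return False
    return parsed.version == 4

print(is_valid_sketch_identifier("sketch-" + str(uuid.uuid4())))  # True
print(is_valid_sketch_identifier("sketch-not-a-uuid"))            # False
```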
+
+1. The Sketch application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attribute mappings.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the Sketch application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | first_name | user.givenname |
+ | surname | user.surname |
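
Expressed as code, the table above is a simple projection from Azure AD source attributes to the claim names Sketch reads. A sketch using a hypothetical user record (the field values are made up):

```python
# Illustrative mapping of the source attributes in the table above to the
# claim names Sketch expects; the user record is a stand-in, not real data.
CLAIM_MAP = {
    "email": "mail",
    "first_name": "givenname",
    "surname": "surname",
}

def build_claims(user: dict) -> dict:
    """Project an Azure AD-style user record onto the Sketch claim names."""
    return {claim: user[source] for claim, source in CLAIM_MAP.items()}

user = {"mail": "B.Simon@contoso.com", "givenname": "B.", "surname": "Simon"}
print(build_claims(user))
# {'email': 'B.Simon@contoso.com', 'first_name': 'B.', 'surname': 'Simon'}
```

In the portal, this projection is what the claim rules under **Attributes & Claims** perform; the code is only a mental model.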
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Sketch** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+    1. In the **User name** field, enter the username in the format username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Sketch.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Sketch**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the **Default Access** role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Sketch SSO
+
+To configure single sign-on on the **Sketch** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Sketch support team](mailto:sso-support@sketch.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Sketch test user
+
+In this section, a user called B.Simon is created in Sketch. Sketch supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Sketch, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This will redirect to the Sketch Sign-on URL, where you can initiate the login flow.
+
+* Go to the Sketch Sign-on URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the Sketch tile in My Apps, you're redirected to the Sketch Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Sketch, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Skybreathe Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skybreathe-analytics-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Skybreathe® Analytics'
+description: Learn how to configure single sign-on between Azure Active Directory and Skybreathe® Analytics.
+Last updated: 08/22/2022
+# Tutorial: Azure AD SSO integration with Skybreathe® Analytics
+
+In this tutorial, you'll learn how to integrate Skybreathe® Analytics with Azure Active Directory (Azure AD). When you integrate Skybreathe® Analytics with Azure AD, you can:
+
+* Control in Azure AD who has access to Skybreathe® Analytics.
+* Enable your users to be automatically signed-in to Skybreathe® Analytics with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Skybreathe® Analytics single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Skybreathe® Analytics supports **SP** and **IDP** initiated SSO.
+
+## Add Skybreathe® Analytics from the gallery
+
+To configure the integration of Skybreathe® Analytics into Azure AD, you need to add Skybreathe® Analytics from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Skybreathe® Analytics** in the search box.
+1. Select **Skybreathe® Analytics** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Skybreathe® Analytics
+
+Configure and test Azure AD SSO with Skybreathe® Analytics using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Skybreathe® Analytics.
+
+To configure and test Azure AD SSO with Skybreathe® Analytics, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Skybreathe Analytics SSO](#configure-skybreathe-analytics-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Skybreathe Analytics test user](#create-skybreathe-analytics-test-user)** - to have a counterpart of B.Simon in Skybreathe® Analytics that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Skybreathe® Analytics** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+    ![Screenshot shows how to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ 1. In the **Identifier** text box, type a URL using the following pattern:
+ `https://auth.skybreathe.com/auth/realms/<ICAO>`
+`
+ 1. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://auth.skybreathe.com/auth/realms/<ICAO>/broker/sbfe-<icao>-idp/endpoint/client/sso`
+
+1. Click **Set additional URLs** and perform the following steps if you wish to configure the application in **SP** initiated mode:
+
+ 1. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://auth.skybreathe.com/auth/realms/<ICAO>/broker/sbfe-<icao>-idp/endpoint`
+
+ 1. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<domain>.skybreathe.com/saml/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Skybreathe® Analytics Client support team](mailto:support@openairlines.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Skybreathe® Analytics application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attribute mappings.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the Skybreathe® Analytics application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | firstname | user.givenname |
+ | initials | user.employeeid |
+ | lastname | user.surname |
+ | groups | user.groups |
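
For intuition, the mapping above ends up as an `AttributeStatement` inside the SAML assertion. The sketch below builds a minimal one with illustrative values; the real assertion is produced and signed by Azure AD, not by you:

```python
# Minimal sketch of the AttributeStatement shape the mapping above yields;
# attribute names follow the table, values are illustrative.
import xml.etree.ElementTree as ET

ASSERT_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def attribute_statement(attrs: dict) -> str:
    """Serialize a name->value mapping as a bare SAML AttributeStatement."""
    stmt = ET.Element(f"{{{ASSERT_NS}}}AttributeStatement")
    for name, value in attrs.items():
        attr = ET.SubElement(stmt, f"{{{ASSERT_NS}}}Attribute", Name=name)
        ET.SubElement(attr, f"{{{ASSERT_NS}}}AttributeValue").text = value
    return ET.tostring(stmt, encoding="unicode")

xml = attribute_statement({"firstname": "Britta", "lastname": "Simon"})
print("firstname" in xml and "Britta" in xml)  # True
```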
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+    ![Screenshot shows how to copy the App Federation Metadata Url.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+    1. In the **User name** field, enter the username in the format username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Skybreathe® Analytics.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Skybreathe® Analytics**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the **Default Access** role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Skybreathe Analytics SSO
+
+To configure single sign-on on the **Skybreathe® Analytics** side, you need to send the **App Federation Metadata Url** to the [Skybreathe® Analytics support team](mailto:support@openairlines.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Skybreathe Analytics test user
+
+In this section, you create a user called Britta Simon in Skybreathe® Analytics. Work with the [Skybreathe® Analytics support team](mailto:support@openairlines.com) to add the users to the Skybreathe® Analytics platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect to the Skybreathe® Analytics Sign-on URL, where you can initiate the login flow.
+
+* Go to the Skybreathe® Analytics Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Skybreathe® Analytics instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Skybreathe® Analytics tile in My Apps, you're redirected to the application sign-on page to initiate the login flow if the app is configured in SP mode; if it's configured in IDP mode, you should be automatically signed in to the Skybreathe® Analytics instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Skybreathe® Analytics, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Tigergraph Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tigergraph-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with TigerGraph'
+description: Learn how to configure single sign-on between Azure Active Directory and TigerGraph.
+Last updated: 08/22/2022
+# Tutorial: Azure AD SSO integration with TigerGraph
+
+In this tutorial, you'll learn how to integrate TigerGraph with Azure Active Directory (Azure AD). When you integrate TigerGraph with Azure AD, you can:
+
+* Control in Azure AD who has access to TigerGraph.
+* Enable your users to be automatically signed-in to TigerGraph with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* TigerGraph single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* TigerGraph supports **SP** and **IDP** initiated SSO.
+
+## Add TigerGraph from the gallery
+
+To configure the integration of TigerGraph into Azure AD, you need to add TigerGraph from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **TigerGraph** in the search box.
+1. Select **TigerGraph** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for TigerGraph
+
+Configure and test Azure AD SSO with TigerGraph using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at TigerGraph.
+
+To configure and test Azure AD SSO with TigerGraph, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure TigerGraph SSO](#configure-tigergraph-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create TigerGraph test user](#create-tigergraph-test-user)** - to have a counterpart of B.Simon in TigerGraph that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **TigerGraph** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit a Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<your-tigergraph-hostname>:14240/gsqlserver/gsql/saml/meta`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<your-tigergraph-hostname>:14240/api/auth/saml/acs`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<your-tigergraph-hostname>:14240/#/login`
+
+ > [!Note]
+    > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. Contact [TigerGraph support team](mailto:support@tigergraph.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up TigerGraph** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy a configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
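The URL patterns in the **Basic SAML Configuration** step above can be sketched in code. This is a minimal illustration only: the hostname and port below are placeholders, and the real values must come from your TigerGraph deployment or the TigerGraph support team.

```python
# Sketch: derive the three Basic SAML Configuration URLs from a single
# TigerGraph hostname. The hostname and port are hypothetical placeholders.
def tigergraph_saml_urls(hostname: str, port: int = 14240) -> dict:
    base = f"https://{hostname}:{port}"
    return {
        "identifier": f"{base}/gsqlserver/gsql/saml/meta",
        "reply_url": f"{base}/api/auth/saml/acs",
        "sign_on_url": f"{base}/#/login",
    }

urls = tigergraph_saml_urls("tg.example.com")
print(urls["reply_url"])  # https://tg.example.com:14240/api/auth/saml/acs
```

All three endpoints share the same host and port, so keeping them in one helper avoids copy-paste mistakes when you fill in the Azure portal fields.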
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TigerGraph.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **TigerGraph**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure TigerGraph SSO
+
+To configure single sign-on on the **TigerGraph** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [TigerGraph support team](mailto:support@tigergraph.com). They configure this setting so that the SAML SSO connection is set up properly on both sides.
+
+### Create TigerGraph test user
+
+In this section, you create a user called Britta Simon at TigerGraph. Work with [TigerGraph support team](mailto:support@tigergraph.com) to add the users in the TigerGraph platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the TigerGraph Sign-on URL, where you can initiate the login flow.
+
+* Go to the TigerGraph Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the TigerGraph instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the TigerGraph tile in My Apps, if the app is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the TigerGraph instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure TigerGraph, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Workhub Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workhub-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with workhub'
+description: Learn how to configure single sign-on between Azure Active Directory and workhub.
+ Last updated : 08/22/2022
+# Tutorial: Azure AD SSO integration with workhub
+
+In this tutorial, you'll learn how to integrate workhub with Azure Active Directory (Azure AD). When you integrate workhub with Azure AD, you can:
+
+* Control in Azure AD who has access to workhub.
+* Enable your users to be automatically signed-in to workhub with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A workhub single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* workhub supports **SP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add workhub from the gallery
+
+To configure the integration of workhub into Azure AD, you need to add workhub from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **workhub** in the search box.
+1. Select **workhub** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for workhub
+
+Configure and test Azure AD SSO with workhub using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at workhub.
+
+To configure and test Azure AD SSO with workhub, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure workhub SSO](#configure-workhub-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create workhub test user](#create-workhub-test-user)** - to have a counterpart of B.Simon in workhub that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **workhub** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit basic SAML Configuration.](common/edit-urls.png "Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://ainz-okal-gown.firebaseapp.com/__/auth/handler`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://ainz-okal-gown.firebaseapp.com/__/auth/handler`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://admin.workhub.site/sso`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up workhub** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to workhub.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **workhub**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure workhub SSO
+
+To configure single sign-on on the **workhub** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [workhub support team](mailto:team_bkp@bitkey.jp). They configure this setting so that the SAML SSO connection is set up properly on both sides.
+
+### Create workhub test user
+
+In this section, you create a user called Britta Simon at workhub. Work with [workhub support team](mailto:team_bkp@bitkey.jp) to add the users in the workhub platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the workhub Sign-on URL, where you can initiate the login flow.
+
+* Go to the workhub Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the workhub tile in My Apps, you're redirected to the workhub Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure workhub, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
Advisor recommends resizing virtual machines when it's possible to fit the curre
### Burstable recommendations
-We evaluate is workloads are eligible to run on specialized SKUs called **Burstable SKUs** that support variable workload performance requirements and are less expensive than general purpose SKUs. Learn more about burstable SKUs here: [B-series burstable - Azure Virtual Machines](../virtual-machines/sizes-b-series-burstable.md).
+We evaluate if workloads are eligible to run on specialized SKUs called **Burstable SKUs** that support variable workload performance requirements and are less expensive than general purpose SKUs. Learn more about burstable SKUs here: [B-series burstable - Azure Virtual Machines](../virtual-machines/sizes-b-series-burstable.md).
- A burstable SKU recommendation is made if:
  - The average **CPU utilization** is less than a burstable SKU's baseline performance
aks Operator Best Practices Cluster Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-isolation.md
For more information about these features, see [Best practices for authenticatio
### Containers *Containers* include: * The Azure Policy Add-on for AKS to enforce pod security.
-* The use of pod security contexts.
+* The use of pod security admission.
* Scanning both images and the runtime for vulnerabilities. * Using App Armor or Seccomp (Secure Computing) to restrict container access to the underlying node.
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
For even more granular control of container actions, you can also use built-in L
Built-in Linux security features are only available on Linux nodes and pods. > [!NOTE]
-> Currently, Kubernetes environments aren't completely safe for hostile multi-tenant usage. Additional security features, like *AppArmor*, *seccomp*,*Pod Security Policies*, or Kubernetes RBAC for nodes, efficiently block exploits.
+> Currently, Kubernetes environments aren't completely safe for hostile multi-tenant usage. Additional security features, like *Microsoft Defender for Containers*, *AppArmor*, *seccomp*, *Pod Security Admission*, or Kubernetes RBAC for nodes, efficiently block exploits.
> >For true security when running hostile multi-tenant workloads, only trust a hypervisor. The security domain for Kubernetes becomes the entire cluster, not an individual node. >
api-management Gateway Log Schema Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/gateway-log-schema-reference.md
The following properties are logged for each API request.
| Property | Type | Description | | - | - | - |
-| method | string | HTTP method of the incoming request |
-| url | string | URL of the incoming request |
-| responseCode | integer | Status code of the HTTP response sent to a client |
-| responseSize | integer | Number of bytes sent to a client during request processing |
-| cache | string | Status of API Management cache involvement in request processing (hit, miss, none) |
-| apiId | string | API entity identifier for current request |
-| operationId | string | Operation entity identifier for current request |
-| clientProtocol | string | HTTP protocol version of the incoming request |
-| clientTime | integer | Number of milliseconds spent on overall client I/O (connecting, sending, and receiving bytes) |
-| apiRevision | string | API revision for current request |
-| clientTlsVersion| string | TLS version used by client sending request |
-| lastError | object | For an unsuccessful request, details about the last request processing error |
-| backendMethod | string | HTTP method of the request sent to a backend |
-| backendUrl | string | URL of the request sent to a backend |
-| backendResponseCode | integer | Code of the HTTP response received from a backend |
-| backedProtocol | string | HTTP protocol version of the request sent to a backend |
-| backendTime | integer | Number of milliseconds spent on overall backend IO (connecting, sending, and receiving bytes) |
+| ApiId | string | API entity identifier for current request |
+| ApimSubscriptionId | string | Subscription entity identifier for current request |
+| ApiRevision | string | API revision for current request |
+| BackendId | string | Backend entity identifier for current request |
+| BackendMethod | string | HTTP method of the request sent to a backend |
+| BackendProtocol | string | HTTP protocol version of the request sent to a backend |
+| BackendRequestBody | string | Backend request body |
+| BackendRequestHeaders | dynamic | Collection of HTTP headers sent to a backend |
+| BackendResponseBody | string | Backend response body |
+| BackendResponseCode | int | Code of the HTTP response received from a backend |
+| BackendResponseHeaders | dynamic | Collection of HTTP headers received from a backend |
+| BackendTime | long | Number of milliseconds spent on overall backend I/O (connecting, sending, and receiving bytes) |
+| BackendUrl | string | URL of the request sent to a backend |
+| Cache | string | Status of API Management cache involvement in request processing (hit, miss, none) |
+| CacheTime | long | Number of milliseconds spent on overall API Management cache I/O (connecting, sending, and receiving bytes) |
+| ClientProtocol | string | HTTP protocol version of the incoming request |
+| ClientTime | long | Number of milliseconds spent on overall client I/O (connecting, sending, and receiving bytes) |
+| ClientTlsVersion | string | TLS version used by client sending request |
+| Errors | dynamic | Collection of errors that occurred during request processing |
+| IsRequestSuccess | bool | HTTP request completed with response status code within 2xx or 3xx range |
+| LastErrorElapsed | long | Number of milliseconds elapsed from when the gateway received the request until the error occurred |
+| LastErrorMessage | string | Error message |
+| LastErrorReason | string | Error reason |
+| LastErrorScope | string | Scope of the policy document containing the policy that caused the error |
+| LastErrorSection | string | Section of the policy document containing the policy that caused the error |
+| LastErrorSource | string | Name of the policy or internal processing handler that caused the error |
+| Method | string | HTTP method of the incoming request |
+| OperationId | string | Operation entity identifier for current request |
+| ProductId | string | Product entity identifier for current request |
+| RequestBody | string | Client request body |
+| RequestHeaders | dynamic | Collection of HTTP headers sent by a client |
+| RequestSize | int | Number of bytes received from a client during request processing |
+| ResponseBody | string | Gateway response body |
+| ResponseCode | int | Status code of the HTTP response sent to a client |
+| ResponseHeaders | dynamic | Collection of HTTP headers sent to a client |
+| ResponseSize | int | Number of bytes sent to a client during request processing |
+| TotalTime | long | Number of milliseconds spent on overall HTTP request (from first byte received by API Management to last byte a client received back) |
+| TraceRecords | dynamic | Records emitted by trace policies |
+| Url | string | URL of the incoming request |
+| UserId | string | User entity identifier for current request |
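The relationships among these fields can be illustrated with a short consistency check: per the schema, `IsRequestSuccess` should agree with `ResponseCode` falling in the 2xx or 3xx range, and `TotalTime` includes the backend I/O measured by `BackendTime`. A minimal sketch (the sample record below is hypothetical, not real log output):

```python
# Hypothetical gateway log record using field names from the schema above.
record = {
    "Method": "GET",
    "Url": "https://contoso.azure-api.net/echo/resource",
    "ResponseCode": 200,
    "BackendResponseCode": 200,
    "TotalTime": 123,   # ms, first byte in to last byte out
    "BackendTime": 87,  # ms spent on backend I/O
    "Cache": "miss",
    "IsRequestSuccess": True,
}

def is_success(response_code: int) -> bool:
    # Per the schema: success means a response status code within 2xx or 3xx.
    return 200 <= response_code < 400

assert record["IsRequestSuccess"] == is_success(record["ResponseCode"])

# Gateway-side overhead as seen by the client: total time minus backend I/O.
gateway_overhead_ms = record["TotalTime"] - record["BackendTime"]
print(gateway_overhead_ms)  # 36
```

Comparing `TotalTime` against `BackendTime` this way is a common first step when deciding whether latency originates in the gateway or the backend.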
## Next steps
api-management Import Container App With Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-container-app-with-oas.md
This article shows how to import an Azure Container App to Azure API Management
> * Import a Container App that exposes a Web API > * Test the API in the Azure portal
-> [!NOTE]
-> Azure Container Apps are currently in preview.
- ## Expose Container App with API Management [Azure Container Apps](../container-apps/overview.md) allows you to deploy containerized apps without managing complex infrastructure. API developers can write code using their preferred programming language or framework, build microservices with full support for Distributed Application Runtime (Dapr), and scale based on HTTP traffic or other events.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Title: 'Tutorial: Deploy a Python Django or Flask web app with PostgreSQL' description: Create a Python Django or Flask web app with a PostgreSQL database and deploy it to Azure. The tutorial uses either the Django or Flask framework and the app is hosted on Azure App Service on Linux.-+ ms.devlang: python
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
## 4 - Allow web app to access the database
-After the Azure Database for PostgreSQL server is created, configure access to the server from the web app by adding a firewall rule. This can be done through the Azure portal or the Azure CLI.
+After the Azure Database for PostgreSQL server is created, configure access to the server from the web app by adding a firewall rule. This can be done through the Azure portal or the Azure CLI.
If you're working in VS Code, right-click the database server and select **Open in Portal** to go to the Azure portal. Or, go to the [Azure Cloud Shell](https://shell.azure.com) and run the Azure CLI commands. ### [Azure portal](#tab/azure-portal-access)
Follow these steps while signed-in to the Azure portal to delete a resource grou
| [!INCLUDE [Remove resource group Azure portal 2](<./includes/tutorial-python-postgresql-app/remove-resource-group-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-2-240px.png" lightbox="./media/tutorial-python-postgresql-app/remove-resource-group-azure-portal-2.png" alt-text="A screenshot showing how to delete a resource group in the Azure portal." ::: | | [!INCLUDE [Remove resource group Azure portal 3](<./includes/tutorial-python-postgresql-app/remove-resource-group-azure-portal-3.md>)] | | - ### [VS Code](#tab/vscode-aztools) | Instructions | Screenshot |
Follow these steps while signed-in to the Azure portal to delete a resource grou
[!INCLUDE [Stream logs CLI](<./includes/tutorial-python-postgresql-app/clean-up-resources-cli.md>)] --+ Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
applied-ai-services Compose Custom Models V2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-v2-1.md
+
+ Title: "How to guide: create and compose custom models with Form Recognizer v2.1"
+
+description: Learn how to create, compose, use, and manage custom models with Form Recognizer v2.1
+ Last updated : 08/22/2022
+recommendations: false
++
+# Compose custom models v2.1
+
+> [!NOTE]
+> This how-to guide references Form Recognizer v2.1. To try Form Recognizer v3.0, see [Compose custom models v3.0](compose-custom-models-v3.md).
+
+Form Recognizer uses advanced machine-learning technology to detect and extract information from document images and return the extracted data in a structured JSON output. With Form Recognizer, you can train standalone custom models or combine custom models to create composed models.
+
+* **Custom models**. Form Recognizer custom models enable you to analyze and extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases.
+
+* **Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model that encompasses your form types. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis.
+
+In this article, you'll learn how to create Form Recognizer custom and composed models using our [Form Recognizer Sample Labeling tool](label-tool.md), [REST APIs](quickstarts/client-library.md?branch=main&pivots=programming-language-rest-api#train-a-custom-model), or [client-library SDKs](quickstarts/client-library.md?branch=main&pivots=programming-language-csharp#train-a-custom-model).
+
+## Sample Labeling tool
+
+Try extracting data from custom forms using our Sample Labeling tool. You'll need the following resources:
+
+* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+
+* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
+
+ :::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+> [!div class="nextstepaction"]
+> [Try it](https://fott-2-1.azurewebsites.net/projects/create)
+
+In the Form Recognizer UI:
+
+1. Select **Use Custom to train a model with labels and get key value pairs**.
+
+ :::image type="content" source="media/label-tool/fott-use-custom.png" alt-text="Screenshot of the FOTT tool select custom model option.":::
+
+1. In the next window, select **New project**:
+
+ :::image type="content" source="media/label-tool/fott-new-project.png" alt-text="Screenshot of the FOTT tool select new project option.":::
+
+## Create your models
+
+The steps for building, training, and using custom and composed models are as follows:
+
+* [**Assemble your training dataset**](#assemble-your-training-dataset)
+* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
+* [**Train your custom model**](#train-your-custom-model)
+* [**Compose custom models**](#create-a-composed-model)
+* [**Analyze documents**](#analyze-documents-with-your-custom-or-composed-model)
+* [**Manage your custom models**](#manage-your-custom-models)
+
+## Assemble your training dataset
+
+Building a custom model begins with establishing your training dataset. You'll need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](build-training-data-set.md#custom-model-input-requirements) for Form Recognizer.
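The dataset requirements above can be sketched as a quick pre-flight check: at least five completed forms of the same type, in one of the supported file types (jpg, png, pdf, tiff). The file names below are hypothetical, and the full rules live in the linked input requirements.

```python
from pathlib import Path

# Supported extensions per the paragraph above; see the input requirements
# link for the complete rule set.
ALLOWED = {".jpg", ".png", ".pdf", ".tiff"}
MIN_FORMS = 5  # minimum completed forms of the same type

def validate_training_set(filenames):
    """Return (ok, reason) for a candidate Form Recognizer training set."""
    supported = [f for f in filenames if Path(f).suffix.lower() in ALLOWED]
    if len(supported) < MIN_FORMS:
        return False, f"need at least {MIN_FORMS} supported files, found {len(supported)}"
    return True, "ok"

ok, reason = validate_training_set(
    ["form1.pdf", "form2.pdf", "form3.jpg", "form4.png", "form5.tiff"]
)
print(ok, reason)  # True ok
```

Running a check like this before uploading to blob storage saves a failed training round caused by too few, or unsupported, files.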
+
+## Upload your training dataset
+
+You'll need to [upload your training data](build-training-data-set.md#upload-your-training-data)
+to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, *see* [Azure Storage quickstart for Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+## Train your custom model
+
+You [train your model](./quickstarts/try-sdk-rest-api.md#train-a-custom-model) with labeled data sets. Labeled datasets rely on the prebuilt-layout API, but include supplementary human input, such as your specific labels and field locations. Start with at least five completed forms of the same type for your labeled training data.
+
+When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+
+Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model. Add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
+
+[Get started with Train with labels](label-tool.md)
+
+> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+
+## Create a composed model
+
+> [!NOTE]
+> **Model Compose is only available for custom models trained _with_ labels.** Attempting to compose unlabeled models will produce an error.
+
+With the Model Compose operation, you can assign up to 100 trained custom models to a single model ID. When you call Analyze with the composed model ID, Form Recognizer will first classify the form you submitted, choose the best matching assigned model, and then return results for that model. This operation is useful when incoming forms may belong to one of several templates.
+
+Using the Form Recognizer Sample Labeling tool, the REST API, or the Client-library SDKs, follow the steps below to set up a composed model:
+
+1. [**Gather your custom model IDs**](#gather-your-custom-model-ids)
+1. [**Compose your custom models**](#compose-your-custom-models)
+
+#### Gather your custom model IDs
+
+Once the training process has successfully completed, your custom model will be assigned a model ID. You can retrieve a model ID as follows:
+
+### [**Form Recognizer Sample Labeling tool**](#tab/fott)
+
+When you train models using the [**Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net/), the model ID is located in the Train Result window:
++
+### [**REST API**](#tab/rest-api)
+
+The [**REST API**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#train-a-custom-model) will return a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model:
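Since the model ID is the last path segment of that **Location** header, it can be pulled out with a few lines of standard-library code. The header value below is a made-up example, not a real endpoint or model:

```python
from urllib.parse import urlparse

# Hypothetical Location header returned by a v2.1 Train Custom Model request.
location = (
    "https://westus.api.cognitive.microsoft.com/formrecognizer/v2.1/"
    "custom/models/00000000-1111-2222-3333-444444444444"
)

# The model ID is the final segment of the URL path.
model_id = urlparse(location).path.rstrip("/").rsplit("/", 1)[-1]
print(model_id)  # 00000000-1111-2222-3333-444444444444
```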
++
+### [**Client-library SDKs**](#tab/sdks)
+
+ The [**client-library SDKs**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-csharp#train-a-custom-model) return a model object that can be queried to return the trained model ID:
+
+* C\# | [CustomFormModel Class](/dotnet/api/azure.ai.formrecognizer.training.customformmodel?view=azure-dotnet&preserve-view=true#properties "Azure SDK for .NET")
+
+* Java | [CustomFormModelInfo Class](/java/api/com.azure.ai.formrecognizer.training.models.customformmodelinfo?view=azure-java-stable&preserve-view=true#methods "Azure SDK for Java")
+
+* JavaScript | [CustomFormModelInfo interface](/javascript/api/@azure/ai-form-recognizer/customformmodelinfo?view=azure-node-latest&preserve-view=true&branch=main#properties "Azure SDK for JavaScript")
+
+* Python | [CustomFormModelInfo Class](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.customformmodelinfo?view=azure-python&preserve-view=true&branch=main#variables "Azure SDK for Python")
+++
+#### Compose your custom models
+
+After you've gathered your custom models corresponding to a single form type, you can compose them into a single model.
+
+### [**Form Recognizer Sample Labeling tool**](#tab/fott)
+
+The **Sample Labeling tool** enables you to quickly get started training models and composing them into a single model ID.
+
+After you have completed training, compose your models as follows:
+
+1. On the left rail menu, select the **Model Compose** icon (merging arrow).
+
+1. In the main window, select the models you wish to assign to a single model ID. Models with the arrows icon are already composed models.
+
+1. Choose the **Compose** button from the upper-left corner.
+
+1. In the pop-up window, name your newly composed model and select **Compose**.
+
+When the operation completes, your newly composed model will appear in the list.
+
+ :::image type="content" source="media/custom-model-compose.png" alt-text="Screenshot of the model compose window." lightbox="media/custom-model-compose-expanded.png":::
+
+### [**REST API**](#tab/rest-api)
+
+Using the **REST API**, you can make a [**Compose Custom Model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) request to create a single composed model from existing models. The request body requires a string array of your `modelIds` to compose and you can optionally define the `modelName`.
+
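As a sketch, such a request body can be assembled as follows (the model IDs and the model name are placeholders for illustration):

```python
import json

# Placeholder IDs of previously trained custom models (assumptions for illustration).
model_ids = [
    "1234abcd-0000-0000-0000-000000000001",
    "1234abcd-0000-0000-0000-000000000002",
]

# Body for the Compose Custom Model request: a required string array of
# model IDs plus an optional display name for the composed model.
compose_body = {
    "modelIds": model_ids,
    "modelName": "purchase-order-composed",  # optional
}

print(json.dumps(compose_body, indent=2))
```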
+### [**Client-library SDKs**](#tab/sdks)
+
+Use the programming language code of your choice to create a composed model that will be called with a single model ID. Below are links to code samples that demonstrate how to create a composed model from existing custom models:
+
+* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md).
+
+* [**Java**](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeModel.java).
+
+* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/createComposedModel.js).
+
+* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_compose_model.py)
+++
+## Analyze documents with your custom or composed model
+
+ The custom form **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You can provide a single custom model ID or a composed model ID for the `modelID` parameter.
+
+### [**Form Recognizer Sample Labeling tool**](#tab/fott)
+
+1. On the tool's left-pane menu, select the **Analyze icon** (light bulb).
+
+1. Choose a local file or image URL to analyze.
+
+1. Select the **Run Analysis** button.
+
+1. The tool will apply tags in bounding boxes and report the confidence percentage for each tag.
++
+### [**REST API**](#tab/rest-api)
+
+Using the REST API, you can make an [Analyze Document](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request to analyze a document and extract key-value pairs and table data.
+
+### [**Client-library SDKs**](#tab/sdks)
+
+Use the programming language of your choice to analyze a form or document with a custom or composed model. You'll need your Form Recognizer endpoint, key, and model ID.
+
+* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md)
+
+* [**Java**](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
+
+* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/recognizeCustomForm.js)
+
+* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.1/sample_recognize_custom_forms.py)
+++
+Test your newly trained models by [analyzing forms](./quickstarts/try-sdk-rest-api.md#analyze-forms-with-a-custom-model) that weren't part of the training dataset. Depending on the reported accuracy, you may want to continue training to [improve results](label-tool.md#improve-results).
+
+## Manage your custom models
+
+You can [manage your custom models](./quickstarts/try-sdk-rest-api.md#manage-custom-models) throughout their lifecycle by viewing a [list of all custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels) under your subscription, retrieving information about [a specific custom model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModel), and [deleting custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) from your account.
+
+Great! You've learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
+
+## Next steps
+
+Learn more about the Form Recognizer client library by exploring our API reference documentation.
+
+> [!div class="nextstepaction"]
+> [Form Recognizer API reference](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
+>
applied-ai-services Compose Custom Models V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-v3.md
+
+ Title: "How-to guide: create and compose custom models with Form Recognizer v3.0"
+
+description: Learn how to create, use, and manage Form Recognizer v3.0 custom and composed models
+++++ Last updated : 08/22/2022+
+recommendations: false
++
+# Compose custom models v3.0
+
+> [!NOTE]
+> This how-to guide references Form Recognizer v3.0. To use Form Recognizer v2.1, see [Compose custom models v2.1](compose-custom-models-v2-1.md).
+
+A composed model is created by taking a collection of custom models and assigning them to a single model ID. You can assign up to 100 trained custom models to a single composed model ID. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis. Composed models are useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
+
+To learn more, see [Composed custom models](concept-composed-models.md).
+
+In this article, you'll learn how to create and use composed custom models to analyze your forms and documents.
+
+## Prerequisites
+
+To get started, you'll need the following resources:
+
+* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/).
+
+* **A Form Recognizer instance**. Once you have your Azure subscription, [create a Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+
+ 1. After the resource deploys, select **Go to resource**.
+
+ 1. Copy the **Keys and Endpoint** values from the Azure portal and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
+
+ :::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot showing how to access resource key and endpoint URL.":::
+
+ > [!TIP]
+ > For more information, see [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
+
+* **An Azure storage account.** If you don't know how to create an Azure storage account, follow the [Azure Storage quickstart for Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md).
+
+## Create your custom models
+
+First, you'll need a set of custom models to compose. You can use the Form Recognizer Studio, REST API, or client-library SDKs. The steps are as follows:
+
+* [**Assemble your training dataset**](#assemble-your-training-dataset)
+* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
+* [**Train your custom models**](#train-your-custom-model)
+
+## Assemble your training dataset
+
+Building a custom model begins with establishing your training dataset. You'll need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types (jpg, png, pdf, tiff) and contain both text and handwriting. Your forms must follow the [input requirements](build-training-data-set.md#custom-model-input-requirements) for Form Recognizer.
+
+>[!TIP]
+> Follow these tips to optimize your data set for training:
+>
+> * If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+> * For filled-in forms, use examples that have all of their fields filled in.
+> * Use forms with different values in each field.
+> * If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+See [Build a training data set](./build-training-data-set.md) for tips on how to collect your training documents.
+
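The requirements above can be checked with a small script before you upload. A minimal sketch over a list of candidate file names (the helper name and the example files are illustrative, not part of the service):

```python
from pathlib import PurePath

ALLOWED = {".jpg", ".jpeg", ".png", ".pdf", ".tif", ".tiff"}
MIN_DOCS = 5  # at least five completed forms of the same type

def check_training_files(filenames):
    """Return a list of problems found in a proposed training dataset."""
    # Label and OCR companion files don't count toward the document minimum.
    docs = [f for f in filenames
            if not f.endswith((".labels.json", ".ocr.json"))]
    problems = []
    if len(docs) < MIN_DOCS:
        problems.append(f"only {len(docs)} documents; at least {MIN_DOCS} required")
    for f in docs:
        if PurePath(f).suffix.lower() not in ALLOWED:
            problems.append(f"unsupported file type: {f}")
    return problems

print(check_training_files(["a.pdf", "b.png", "c.jpg", "d.tiff", "e.pdf"]))  # []
```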
+## Upload your training dataset
+
+When you've gathered a set of training documents, you'll need to [upload your training data](build-training-data-set.md#upload-your-training-data) to an Azure blob storage container.
+
+If you want to use manually labeled data, you'll also have to upload the *.labels.json* and *.ocr.json* files that correspond to your training documents.
+
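A quick way to confirm that every labeled document has its companion files before uploading. A minimal sketch over a list of blob names, assuming the *\<filename\>.labels.json* / *\<filename\>.ocr.json* naming convention described above (the file names are placeholders):

```python
def missing_companions(blob_names):
    """For each training document, report any missing
    .labels.json or .ocr.json companion file."""
    docs = [n for n in blob_names
            if not n.endswith((".labels.json", ".ocr.json"))]
    names = set(blob_names)
    missing = []
    for doc in docs:
        for suffix in (".labels.json", ".ocr.json"):
            if doc + suffix not in names:
                missing.append(doc + suffix)
    return missing

blobs = ["form1.pdf", "form1.pdf.labels.json", "form1.pdf.ocr.json", "form2.pdf"]
print(missing_companions(blobs))  # form2.pdf's companions are missing
```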
+## Train your custom model
+
+When you [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+
+Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
+
+### [Form Recognizer Studio](#tab/studio)
+
+To create custom models, start with configuring your project:
+
+1. From the Studio homepage, select [**Create new**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) from the Custom model card.
+
+1. Use the **Create a project** command to start the new project configuration wizard.
+
+1. Enter project details, select the Azure subscription and resource, and the Azure Blob storage container that contains your data.
+
+1. Review and submit your settings to create the project.
++
+While creating your custom models, you may need to extract data collections from your documents. The collections may appear in one of two formats, using tables as the visual pattern:
+
+* Dynamic or variable count of values (rows) for a given set of fields (columns)
+
+* Specific collection of values for a given set of fields (columns and/or rows)
+
+See [Form Recognizer Studio: labeling as tables](quickstarts/try-v3-form-recognizer-studio.md#labeling-as-tables)
+
+### [REST API](#tab/rest)
+
+Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
+
+Label files contain key-value associations that a user has entered manually. They're needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to generate these files.
+
+Once you have your label files, you can include them by calling the training method with the *useLabelFile* parameter set to `true`.
++
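A sketch of the corresponding train request body, assuming the v2.1-style schema in which the container SAS URL is passed as `source` (the URL below is a placeholder, not a working value):

```python
import json

# Placeholder SAS URL for the blob container holding documents and label files.
SOURCE_SAS_URL = "https://<storage-account>.blob.core.windows.net/<container>?<sas-token>"

train_body = {
    "source": SOURCE_SAS_URL,
    "useLabelFile": True,  # train with the .labels.json files in the container
}

print(json.dumps(train_body))
```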
+### [Client-libraries](#tab/sdks)
+
+Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents. Once you have them, you can call the training method with the *useTrainingLabels* parameter set to `true`.
+
+|Language |Method|
+|--|--|
+|**C#**|[**StartBuildModel**](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentmodeladministrationclient.startbuildmodel?view=azure-dotnet#azure-ai-formrecognizer-documentanalysis-documentmodeladministrationclient-startbuildmodel&preserve-view=true)|
+|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.administration.documentmodeladministrationclient.beginbuildmodel?view=azure-java-preview&preserve-view=true)|
+|**JavaScript** | [**beginBuildModel**](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-beginbuildmodel&preserve-view=true)|
+| **Python** | [**begin_build_model**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.documentmodeladministrationclient?view=azure-python#azure-ai-formrecognizer-aio-documentmodeladministrationclient-begin-build-model&preserve-view=true)
+++
+## Create a composed model
+
+> [!NOTE]
+> **The `create compose model` operation is only available for custom models trained _with_ labels.** Attempting to compose unlabeled models will produce an error.
+
+With the [**create compose model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) operation, you can assign up to 100 trained custom models to a single model ID. When you analyze documents with a composed model, Form Recognizer first classifies the form you submitted, then chooses the best-matching assigned model, and returns results for that model. This operation is useful when incoming forms may belong to one of several templates.
+
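Because the analyze response reports which component model produced the result, an application can branch on that value. A minimal sketch over a hypothetical, trimmed response (the model name is made up; `documents`, `docType`, and `confidence` are the relevant response fields):

```python
# Hypothetical trimmed analyze result for illustration; in the real response
# the matched component model is reported in each document's "docType".
analyze_result = {
    "documents": [
        {"docType": "supply-po-model", "confidence": 0.97, "fields": {}}
    ]
}

def matched_model(result: dict) -> str:
    """Return the docType (matched component model) of the first analyzed document."""
    return result["documents"][0]["docType"]

print(matched_model(analyze_result))  # supply-po-model
```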
+### [Form Recognizer Studio](#tab/studio)
+
+Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
+
+* [**Gather your custom model IDs**](#gather-your-model-ids)
+* [**Compose your custom models**](#compose-your-custom-models)
+* [**Analyze documents**](#analyze-documents)
+* [**Manage your composed models**](#manage-your-composed-models)
+
+#### Gather your model IDs
+
+When you train models using the [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/), the model ID is located in the models menu under a project:
++
+#### Compose your custom models
+
+1. Select a custom models project.
+
+1. In the project, select the ```Models``` menu item.
+
+1. From the resulting list of models, select the models you wish to compose.
+
+1. Choose the **Compose** button from the upper-left corner.
+
+1. In the pop-up window, name your newly composed model and select **Compose**.
+
+1. When the operation completes, your newly composed model will appear in the list.
+
+1. Once the model is ready, use the **Test** command to validate it with your test documents and observe the results.
+
+#### Analyze documents
+
+The custom model **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You should provide the composed model ID for the `modelID` parameter in your applications.
++
+#### Manage your composed models
+
+You can manage your custom models throughout their life cycle:
+
+* Test and validate new documents.
+* Download your model to use in your applications.
+* Delete your model when its lifecycle is complete.
++
+### [REST API](#tab/rest)
+
+Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
+
+* [**Compose your custom models**](#compose-your-custom-models)
+* [**Analyze documents**](#analyze-documents)
+* [**Manage your composed models**](#manage-your-composed-models)
++
+#### Compose your custom models
+
+The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel) accepts a list of model IDs to be composed.
++
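A sketch of that request body, assuming the 2022-08-31 schema where component models are listed as objects under `componentModels` (the model IDs and description here are placeholders):

```python
import json

component_ids = ["supply-po-model", "equipment-po-model"]  # placeholder model IDs

# Body for the v3.0 compose request: the new composed model's ID plus the
# component models it should route between.
compose_request = {
    "modelId": "purchase-order-composed",
    "description": "Routes purchase orders to the right component model",
    "componentModels": [{"modelId": m} for m in component_ids],
}

print(json.dumps(compose_request, indent=2))
```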
+#### Analyze documents
+
+To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) request, provide your composed model ID in the request parameters.
++
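As a sketch, the request URL can be assembled from your endpoint, the composed model ID, and the API version. The endpoint and model ID below are placeholders; the `documentModels/{modelId}:analyze` path shape follows the 2022-08-31 analyze route:

```python
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
MODEL_ID = "my-composed-model"                                    # placeholder
API_VERSION = "2022-08-31"

# Build the analyze request URL; the document itself goes in the POST body.
analyze_url = (
    f"{ENDPOINT}/formrecognizer/documentModels/{MODEL_ID}:analyze"
    f"?api-version={API_VERSION}"
)
print(analyze_url)
```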
+#### Manage your composed models
+
+You can manage custom models throughout your development needs including [**copying**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/CopyDocumentModelTo), [**listing**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/GetModels), and [**deleting**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/DeleteModel) your models.
+
+### [Client-libraries](#tab/sdks)
+
+Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
+
+* [**Create a composed model**](#create-a-composed-model)
+* [**Analyze documents**](#analyze-documents)
+* [**Manage your composed models**](#manage-your-composed-models)
+
+#### Create a composed model
+
+You can use the programming language of your choice to create a composed model:
+
+| Programming language| Code sample |
+|--|--|
+|**C#** | [Model compose](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md#create-a-composed-model)
+|**Java** | [Model compose](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md#create-a-composed-model)
+|**JavaScript** | [Compose model](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/composeModel.js)
+|**Python** | [Create composed model](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_create_composed_model.py)
+
+#### Analyze documents
+
+Once you've built your composed model, you can use it to analyze forms and documents. Use your composed `model ID` and let the service decide which of your aggregated custom models fits best according to the document provided.
+
+|Programming language| Code sample |
+|--|--|
+|**C#** | [Analyze a document with a custom/composed model](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzeWithCustomModel.md)
+|**Java** | [Analyze forms with your custom/composed model ](https://github.com/Azure/azure-sdk-for-javocumentFromUrl.java)
+|**JavaScript** | [Analyze documents by model ID](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/analyzeReceiptByModelId.js)
+|**Python** | [Analyze custom documents](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_analyze_custom_documents.py)
+
+## Manage your composed models
+
+You can manage a custom model at each stage of its life cycle. You can view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account.
+
+|Programming language| Code sample |
+|--|--|
+|**C#** | [Analyze a document with a custom/composed model](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzeWithCustomModel.md)|
+|**Java** | [Custom model management operations](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ManageCustomModels.java)|
+|**JavaScript** | [Get model types and schema](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/getModel.js)|
+|**Python** | [Manage models](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_manage_models.py)|
+++
+## Next steps
+
+Try one of our Form Recognizer quickstarts:
+
+> [!div class="nextstepaction"]
+> [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md)
+
+> [!div class="nextstepaction"]
+> [REST API](quickstarts/get-started-v3-sdk-rest-api.md)
+
+> [!div class="nextstepaction"]
+> [C#](quickstarts/get-started-v3-sdk-rest-api.md#prerequisites)
+
+> [!div class="nextstepaction"]
+> [Java](quickstarts/get-started-v3-sdk-rest-api.md)
+
+> [!div class="nextstepaction"]
+> [JavaScript](quickstarts/get-started-v3-sdk-rest-api.md)
+
+> [!div class="nextstepaction"]
+> [Python](quickstarts/get-started-v3-sdk-rest-api.md)
applied-ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-accuracy-confidence.md
Form Recognizer analysis results return an estimated confidence for predicted wo
Field confidence indicates an estimated probability between 0 and 1 that the prediction is correct. For example, a confidence value of 0.95 (95%) indicates that the prediction is likely correct 19 out of 20 times. For scenarios where accuracy is critical, confidence may be used to determine whether to automatically accept the prediction or flag it for human review.
-Confidence scores comprise of 2 components, the field level confidence score and the text extraction confidence score. In addition to the field confidence of position and span, the text extraction confidence in the ```pages``` section of the response is the model's confidence in the text extraction (OCR) process. The two confidence scores should be combined to generate a overall confidence score.
+Confidence scores have two data points: the field level confidence score and the text extraction confidence score. In addition to the field confidence of position and span, the text extraction confidence in the ```pages``` section of the response is the model's confidence in the text extraction (OCR) process. The two confidence scores should be combined to generate one overall confidence score.
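One simple way to combine the two scores is to multiply them, so the overall value reflects both the extraction and the labeling uncertainty. A minimal sketch (the multiplication rule is one conservative choice, not a prescribed formula):

```python
def overall_confidence(field_confidence: float, word_confidences: list) -> float:
    """Combine field-level confidence with the OCR confidence of the words
    spanned by the field. Multiplying is a conservative combination."""
    text_confidence = 1.0
    for c in word_confidences:
        text_confidence *= c
    return field_confidence * text_confidence

# Field predicted at 0.95, built from two words read at 0.99 and 0.98:
print(round(overall_confidence(0.95, [0.99, 0.98]), 4))  # 0.9217
```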
**Form Recognizer Studio** </br> **Analyzed invoice prebuilt-invoice model**
The accuracy of your model is affected by variances in the visual structure of y
* Separate visually distinct document types to train different models.
* As a general rule, if you remove all user entered values and the documents look similar, you need to add more training data to the existing model.
- * If the documents are dissimilar, split your training data into different folders and train a model for each variation. You can then [compose](compose-custom-models.md#create-a-composed-model) the different variations into a single model.
+ * If the documents are dissimilar, split your training data into different folders and train a model for each variation. You can then [compose](compose-custom-models-v2-1.md#create-a-composed-model) the different variations into a single model.
* Make sure that you don't have any extraneous labels.
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false
The business card model combines powerful Optical Character Recognition (OCR) ca
## Development options
+The following tools are supported by Form Recognizer v3.0:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-businessCard**|
+
The following tools are supported by Form Recognizer v2.1:

| Feature | Resources |
|-|-|
|**Business card model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-business-cards)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-The following tools are supported by Form Recognizer v3.0:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-businessCard**|
-
### Try Form Recognizer

See how data, including name, job title, address, email, and company name, is extracted from business cards using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
See how data, including name, job title, address, email, and company name, is ex
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+#### Form Recognizer Studio
> [!NOTE]
-> Form Recognizer studio is available with the preview (v3.0) API.
+> Form Recognizer Studio is available with the v3.0 API.
1. On the Form Recognizer Studio home page, select **Business cards**
See how data, including name, job title, address, email, and company name, is ex
> [!div class="nextstepaction"]
> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
-#### Sample Labeling tool (API v2.1)
-
-You'll need a business card document. You can use our [sample business card document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/businessCard.png).
-
- 1. On the Sample Labeling tool home page, select **Use prebuilt model to get data**.
-
- 1. Select **Business card** from the **Form Type** dropdown menu:
-
- :::image type="content" source="media/try-business-card.png" alt-text="Screenshot: Sample Labeling tool dropdown prebuilt model selection menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)
-
## Input requirements

* For best results, provide one clear photo or high-quality scan per document.
You'll need a business card document. You can use our [sample business card docu
|--|:-|:|
|Business card| <ul><li>English (United States)—en-US</li><li> English (Australia)—en-AU</li><li>English (Canada)—en-CA</li><li>English (United Kingdom)—en-GB</li><li>English (India)—en-IN</li><li>English (Japan)—en-JP</li><li>Japanese (Japan)—ja-JP</li></ul> | Autodetected (en-US or ja-JP) |
-## Field extraction
+## Field extractions
|Name| Type | Description |Standardized output |
|:--|:-|:-|:-:|
You'll need a business card document. You can use our [sample business card docu
| WorkPhones | Array of phone numbers | Work phone number(s) from business card | +1 xxx xxx xxxx |
| OtherPhones | Array of phone numbers | Other phone number(s) from business card | +1 xxx xxx xxxx |
-## Form Recognizer preview v3.0
+## Form Recognizer v3.0
- The Form Recognizer preview introduces several new features and capabilities.
+ Form Recognizer v3.0 introduces several new features and capabilities.
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
+* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
## Next steps
You'll need a business card document. You can use our [sample business card docu
* Explore our REST API:

> [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+ > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false
recommendations: false
With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-* ```Custom form```and ```Custom document``` models can be composed together into a single composed model when they're trained with the same API version or an API version later than ```2021-06-30-preview```. For more information on composing custom template and custom neural models, see [compose model limits](#compose-model-limits).
+* ```Custom form``` and ```Custom document``` models can be composed together into a single composed model when they're trained with the same API version or an API version later than ```2022-08-31```. For more information on composing custom template and custom neural models, see [compose model limits](#compose-model-limits).
* With the model compose operation, you can assign up to 100 trained custom models to a single composed model. To analyze a document with a composed model, Form Recognizer first classifies the submitted form, chooses the best-matching assigned model, and returns results.
* For **_custom template models_**, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms may belong to one of several templates.
* The response will include a ```docType``` property to indicate which of the composed models was used to analyze the document.
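As a sketch of the routing behavior described above, the snippet below reads the ```docType``` returned for each analyzed document. The model IDs, field names, and response excerpt are illustrative assumptions, not real service output.

```python
# Hypothetical analyzeResult excerpt returned by a composed model; the
# model ID, docType value, and fields shown here are illustrative only.
analyze_result = {
    "modelId": "my-composed-model",
    "documents": [
        {
            "docType": "furniture-purchase-order",  # best-matching component model
            "confidence": 0.97,
            "fields": {"Total": {"type": "number", "valueNumber": 120.0}},
        }
    ],
}

def matched_doc_types(result: dict) -> list:
    """Return the docType of each document, i.e. which component model
    the composed model selected during analysis."""
    return [doc["docType"] for doc in result.get("documents", [])]

print(matched_doc_types(analyze_result))
```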
With composed models, you can assign multiple custom models to a composed model
### Composed model compatibility
- |Custom model type | API Version |Custom form 2021-06-30-preview (v3.0)| Custom document 2021-06-30-preview(v3.0) | Custom form GA version (v2.1) or earlier|
+ |Custom model type | API Version |Custom form `2022-08-31` (v3.0)| Custom document `2022-08-31` (v3.0) | Custom form GA version (v2.1) or earlier|
|--|--|--|--|--|
-|**Custom template** (updated custom form)| 2021-06-30-preview | ✱| ✓ | X |
-|**Custom neural**| trained with current API version (2021-06-30-preview) |✓ |✓ | X |
+|**Custom template** (updated custom form)| v3.0 | ✱| ✓ | X |
+|**Custom neural**| trained with current API version (`2022-08-31`) |✓ |✓ | X |
|**Custom form**| Custom form GA version (v2.1) or earlier | X | X| ✓|

**Table symbols**: ✔—supported; X—not supported; ✱—unsupported for this API version, but will be supported in a future API version.
With composed models, you can assign multiple custom models to a composed model
## Development options
-The following resources are supported by Form Recognizer **v3.0** (preview):
+The following resources are supported by Form Recognizer **v3.0**:
| Feature | Resources |
|-|-|
-|_**Custom model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Java SDK](quickstarts/try-v3-java-sdk.md)</li><li>[JavaScript SDK](quickstarts/try-v3-javascript-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|
-| _**Composed model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/ComposeDocumentModel)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentmodeladministrationclient.startcreatecomposedmodel?view=azure-dotnet-preview&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.administration.documentmodeladministrationclient.begincreatecomposedmodel?view=azure-java-preview&preserve-view=true)</li><li>[JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-preview#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python-preview#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
+|_**Custom model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[Java SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[JavaScript SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[Python SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
+| _**Composed model**_| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/ComposeDocumentModel)</li><li>[C# SDK](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentmodeladministrationclient.startcreatecomposedmodel?view=azure-dotnet&preserve-view=true)</li><li>[Java SDK](/java/api/com.azure.ai.formrecognizer.administration.documentmodeladministrationclient.begincreatecomposedmodel?view=azure-java-stable&preserve-view=true)</li><li>[JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</li><li>[Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)</li></ul>|
The following resources are supported by Form Recognizer v2.1:
Learn to create and compose custom models:

> [!div class="nextstepaction"]
-> [**Form Recognizer v2.1 (GA)**](compose-custom-models.md)
+> [**Form Recognizer v2.1**](compose-custom-models-v2-1.md)
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Custom neural models currently only support key-value pairs and selection marks,
## Tabular fields
-With the release of API version **2022-06-30-preview**, custom neural models will support tabular fields (tables):
+With API version **2022-06-30-preview** and later, custom neural models support tabular fields (tables):
-* Models trained with API version 2022-06-30-preview or later will accept tabular field labels.
+* Models trained with API version 2022-08-31 or later will accept tabular field labels.
* Documents analyzed with custom neural models using API version 2022-06-30-preview or later will produce tabular fields aggregated across the tables.
* The results can be found in the ```analyzeResult``` object's ```documents``` array that is returned following an analysis operation.
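To illustrate, here's a minimal sketch of flattening one such tabular field from the ```documents``` array. The field name ```Items``` and the response excerpt are assumptions for illustration, not documented output.

```python
# Hypothetical analyzeResult document with one tabular field ("Items");
# the shape shown (array of objects keyed by column name) is an assumption.
doc = {
    "docType": "custom-neural-model",
    "fields": {
        "Items": {
            "type": "array",
            "valueArray": [
                {"type": "object", "valueObject": {
                    "Description": {"type": "string", "valueString": "Desk"},
                    "Quantity": {"type": "number", "valueNumber": 2.0}}},
                {"type": "object", "valueObject": {
                    "Description": {"type": "string", "valueString": "Chair"},
                    "Quantity": {"type": "number", "valueNumber": 6.0}}},
            ],
        }
    },
}

def table_rows(document: dict, field_name: str) -> list:
    """Flatten a tabular field into a list of {column: simple value} rows."""
    rows = []
    for row in document["fields"][field_name].get("valueArray", []):
        cells = row.get("valueObject", {})
        rows.append({col: cell.get("valueString") or cell.get("valueNumber")
                     for col, cell in cells.items()})
    return rows

for row in table_rows(doc, "Items"):
    print(row)
```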
Tabular fields are also useful when extracting repeating information within a do
## Supported regions
-Starting August 01, 2022, Form Recognizer custom neural model training will only be available in the following Azure regions until further notice:
+As of August 1, 2022, Form Recognizer custom neural model training is only available in the following Azure regions until further notice:
* Brazil South * Canada Central
Starting August 01, 2022, Form Recognizer custom neural model training will only
> [!TIP]
> You can [copy a model](disaster-recovery.md#copy-api-overview) trained in one of the select regions listed above to **any other region** and use it accordingly.
>
-> Use the [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/CopyDocumentModelTo) or [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region.
+> Use the [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/CopyDocumentModelTo) or [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region.
## Best practices
Custom neural models are only available in the [v3 API](v3-migration-guide.md).
| Document Type | REST API | SDK | Label and Test Models|
|--|--|--|--|
-| Custom document | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+| Custom document | [Form Recognizer 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
The build operation to train a model supports a new ```buildMode``` property. To train a custom neural model, set ```buildMode``` to ```neural```.

```REST
-https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
+https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
{ "modelId": "string",
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
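The truncated request body above can be sketched as follows. The model ID and SAS container URL are placeholders, the ```azureBlobSource``` field is an assumption, and the exact body schema should be verified against the 2022-08-31 REST reference before use.

```python
import json

# Minimal sketch of a documentModels:build request body for a custom
# neural model. Placeholder values only; verify the schema against the
# 2022-08-31 REST API reference.
build_request = {
    "modelId": "my-neural-model",
    "buildMode": "neural",  # "template" for custom template models
    "azureBlobSource": {
        "containerUrl": "https://<account>.blob.core.windows.net/<container>?<sas>",
    },
}

# Serialize for the POST to https://{endpoint}/formrecognizer/documentModels:build
body = json.dumps(build_request)
print(body)
```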
* View the REST API:

> [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+ > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false

# Form Recognizer custom template model
-Custom template (formerly custom form) are easy-to-train models that accurately extract labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.
+Custom template (formerly custom form) is an easy-to-train model that accurately extracts labeled key-value pairs, selection marks, tables, regions, and signatures from documents. Template models use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.
Custom template models share the same labeling format and strategy as custom neural models, with support for more field types and languages.
Custom template models support key-value pairs, selection marks, tables, signatu
| Form fields | Selection marks | Tabular fields (Tables) | Signature | Selected regions |
|:--:|:--:|:--:|:--:|:--:|
-| Supported| Supported | Supported | Preview | Supported |
+| Supported| Supported | Supported | Supported| Supported |
## Tabular fields
-With the release of API version **2022-06-30-preview**, custom template models will add support for **cross page** tabular fields (tables):
+With API version **2022-06-30-preview** and later, custom template models support **cross-page** tabular fields (tables):
* To label a table that spans multiple pages, label each row of the table across the different pages in a single table. * As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages if you expect to see those variations in documents.
Template models rely on a defined visual template, changes to the template will
## Training a model
-Template models are available generally [v2.1 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) and in preview [v3 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/BuildDocumentModel). If you're starting with a new project or have an existing labeled dataset, work with the v3 API and Form Recognizer Studio to train a custom template model.
+Template models are generally available in the [v3.0 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/BuildDocumentModel) and the [v2.1 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm). If you're starting with a new project or have an existing labeled dataset, work with the v3 API and Form Recognizer Studio to train a custom template model.
| Model | REST API | SDK | Label and Test Models|
|--|--|--|--|
-| Custom template (preview) | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
-| Custom template | [Form Recognizer 2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)| [Form Recognizer SDK](quickstarts/get-started-sdk-rest-api.md?pivots=programming-language-python)| [Form Recognizer Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
+| Custom template | [Form Recognizer 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom template | [Form Recognizer 2.1 ](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)| [Form Recognizer SDK](quickstarts/get-started-v2-1-sdk-rest-api.md?pivots=programming-language-python)| [Form Recognizer Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
On the v3 API, the build operation to train a model supports a new ```buildMode``` property. To train a custom template model, set ```buildMode``` to ```template```.

```REST
-https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
+https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
{ "modelId": "string",
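As a sketch of assembling that request with ```buildMode``` set to ```template```, the helper below builds the body and rejects unknown modes. The ```azureBlobSource``` field and container URL are placeholder assumptions to verify against the 2022-08-31 REST reference.

```python
# Hedged sketch: assemble a documentModels:build request body.
# Only "template" and "neural" build modes are described in the article.
def make_build_request(model_id: str, build_mode: str, container_url: str) -> dict:
    if build_mode not in ("template", "neural"):
        raise ValueError(f"unsupported buildMode: {build_mode}")
    return {
        "modelId": model_id,
        "buildMode": build_mode,
        # Assumed field name; check the REST reference for the exact schema.
        "azureBlobSource": {"containerUrl": container_url},
    }

req = make_build_request(
    "my-template-model", "template",
    "https://<account>.blob.core.windows.net/<container>?<sas>")
print(req["buildMode"])
```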
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false
This table provides links to the build mode programming language SDK references
|Programming language | SDK reference | Code sample |
||||
-| C#/.NET | [DocumentBuildMode Struct](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentbuildmode?view=azure-dotnet-preview&preserve-view=true#properties) | [Sample_BuildCustomModelAsync.cs](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/tests/samples/Sample_BuildCustomModelAsync.cs)
+| C#/.NET | [DocumentBuildMode Struct](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentbuildmode?view=azure-dotnet&preserve-view=true#properties) | [Sample_BuildCustomModelAsync.cs](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/tests/samples/Sample_BuildCustomModelAsync.cs)
|Java| [DocumentBuildMode Class](/java/api/com.azure.ai.formrecognizer.administration.models.documentbuildmode?view=azure-java-preview&preserve-view=true#fields) | [BuildModel.java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/BuildModel.java)|
-|JavaScript | [DocumentBuildMode type](/javascript/api/@azure/ai-form-recognizer/documentbuildmode?view=azure-node-preview&preserve-view=true)| [buildModel.js](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js)|
-|Python | [DocumentBuildMode Enum](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.documentbuildmode?view=azure-python-preview&preserve-view=true#fields)| [sample_build_model.py](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_build_model.py)|
+|JavaScript | [DocumentBuildMode type](/javascript/api/@azure/ai-form-recognizer/documentbuildmode?view=azure-node-latest&preserve-view=true)| [buildModel.js](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/buildModel.js)|
+|Python | [DocumentBuildMode Enum](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.documentbuildmode?view=azure-python&preserve-view=true#fields)| [sample_build_model.py](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_build_model.py)|
## Compare model features
The table below compares custom template and custom neural features:
## Custom model tools
-The following tools are supported by Form Recognizer v2.1:
+The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID|
|||:|
-|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
+|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[Python SDK](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|***custom-model-id***|
-The following tools are supported by Form Recognizer v3.0:
+The following tools are supported by Form Recognizer v2.1:
| Feature | Resources | Model ID|
|||:|
-|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|***custom-model-id***|
+|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
+ ### Try Form Recognizer
Try extracting data from your specific or unique documents using custom models.
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot that shows the keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+#### Form Recognizer Studio
> [!NOTE]
-> Form Recognizer Studio is available with the preview (v3.0) API.
+> Form Recognizer Studio is available with the v3.0 API.
1. On the **Form Recognizer Studio** home page, select **Custom form**.
Try extracting data from your specific or unique documents using custom models.
> [!div class="nextstepaction"]
> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)
-#### Sample Labeling tool (API v2.1)
--
-|Feature |Custom Template | Custom Neural |
-|--|--|--|
-|Document structure |Template, fixed form, and structured documents.| Structured, semi-structured, and unstructured documents.|
-|Training time | 1 - 5 minutes | 20 - 60 minutes |
-|Data extraction| Key-value pairs, tables, selection marks, signatures, and regions| Key-value pairs and selections marks.|
-|Models per Document type | Requires one model per each document-type variation| Supports a single model for all document-type variations.|
-|Language support| See [custom template model language support](language-support.md)| The custom neural model currently supports English-language documents only.|
-
## Model capabilities

This table compares the supported data extraction areas:

|Model| Form fields | Selection marks | Structured fields (Tables) | Signature | Region labeling |
|--|:--:|:--:|:--:|:--:|:--:|
-|Custom template| ✔ | ✔ | ✔ |✱ | ✔ |
+|Custom template| ✔ | ✔ | ✔ | ✔ | ✔ |
|Custom neural| ✔| ✔ |**n/a**| **n/a** | **n/a** |
-**Table symbols**: ✔—supported; ✱—preview; **n/a**—currently unavailable
+**Table symbols**: ✔—supported; **n/a**—currently unavailable
> [!TIP] > When choosing between the two model types, start with a custom neural model if it meets your functional needs. See [custom neural](concept-custom-neural.md ) to learn more about custom neural models.
The following table describes the features available with the associated tools a
| Document type | REST API | SDK | Label and Test Models|
|--|--|--|--|
-| Custom form 2.1 | [Form Recognizer 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Form Recognizer SDK](quickstarts/get-started-sdk-rest-api.md?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
-| Custom template 3.0 | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
-| Custom neural | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
-
+| Custom form 2.1 | [Form Recognizer 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Form Recognizer SDK](quickstarts/get-started-v2-1-sdk-rest-api.md?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
+| Custom template 3.0 | [Form Recognizer 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom neural | [Form Recognizer 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)| [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
> [!NOTE]
> Custom template models trained with the 3.0 API will have a few improvements over the 2.1 API stemming from improvements to the OCR engine. Datasets used to train a custom template model using the 2.1 API can still be used to train a new model using the 3.0 API.
The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) doesn't support
## Supported languages and locales
- The Form Recognizer preview version introduces more language support for custom models. For a list of supported handwritten and printed text, see [Language support](language-support.md).
+ Form Recognizer v3.0 introduces more language support for custom models. For a list of supported handwritten and printed text, see [Language support](language-support.md).
-## Form Recognizer v3.0 (preview)
+## Form Recognizer v3.0
- Form Recognizer v3.0 (preview) introduces several new features and capabilities:
+ Form Recognizer v3.0 introduces several new features and capabilities:
* **Custom model API (v3.0)**: This version supports signature detection for custom forms. When you train custom models, you can specify certain fields as signatures. When a document is analyzed with your custom model, it indicates whether a signature was detected or not.
-* [Form Recognizer v3.0 migration guide](v3-migration-guide.md): This guide shows you how to use the preview version in your applications and workflows.
-* [REST API (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument): This API shows you more about the preview version and new capabilities.
+* [Form Recognizer v3.0 migration guide](v3-migration-guide.md): This guide shows you how to use v3.0 in your applications and workflows.
+* [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument): Learn more about v3.0 and its new capabilities.
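As an illustrative sketch of consuming the signature detection described above, a field labeled as a signature might come back with a ```signature``` type whose value indicates whether a signature was found. The field name and response excerpt are assumptions, not documented output.

```python
# Hypothetical analyzeResult document from a custom model trained with a
# "Signature" field; the excerpt shape is an assumption for illustration.
document = {
    "docType": "my-custom-model",
    "fields": {
        "Signature": {"type": "signature", "valueSignature": "signed"},
    },
}

def signature_detected(doc: dict, field: str = "Signature") -> bool:
    """Return True if the assumed signature field reports a detected signature."""
    f = doc["fields"].get(field, {})
    return f.get("type") == "signature" and f.get("valueSignature") == "signed"

print(signature_detected(document))
```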
### Try signature detection
Explore Form Recognizer quickstarts and REST APIs:
| Quickstart | REST API|
|--|--|
-|[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-06-30](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)|
-| [v2.1 quickstart](quickstarts/get-started-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
+|[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-08-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|
+| [v2.1 quickstart](quickstarts/get-started-v2-1-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
+
applied-ai-services Concept Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-form-recognizer-studio.md
Title: "Form Recognizer Studio | Preview"
+ Title: "Form Recognizer Studio"
-description: "Concept: Form and document processing, data extraction, and analysis using Form Recognizer Studio (preview)"
+description: "Concept: Form and document processing, data extraction, and analysis using Form Recognizer Studio"
Previously updated : 11/02/2021 Last updated : 08/22/2022 -
-# Form Recognizer Studio (preview)
+# Form Recognizer Studio
->[!NOTE]
-> Form Recognizer Studio is currently in public preview. Some features may not be supported or have limited capabilities.
-
-[Form Recognizer Studio preview](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications. Use the [Form Recognizer Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) to get started analyzing documents with pre-trained models. Build custom template models and reference the models in your applications using the [Python SDK preview](quickstarts/try-v3-python-sdk.md) and other quickstarts.
+[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications. Use the [Form Recognizer Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) to get started analyzing documents with pre-trained models. Build custom template models and reference the models in your applications using the [Python SDK v3.0](quickstarts/get-started-v3-sdk-rest-api.md) and other quickstarts.
The following image shows the Invoice prebuilt model feature at work.
The following image shows the Invoice prebuilt model feature at work.
The following Form Recognizer service features are available in the Studio.
-* **Read**: Try out Form Recognizer's Read feature to extract text lines, words, detected languages, and handwritten style if detected. Start with the [Studio Read feature](https://formrecognizer.appliedai.azure.com/studio/read). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Read overview](concept-read.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/try-v3-python-sdk.md).
+* **Read**: Try out Form Recognizer's Read feature to extract text lines, words, detected languages, and handwritten style if detected. Start with the [Studio Read feature](https://formrecognizer.appliedai.azure.com/studio/read). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Read overview](concept-read.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-v3-sdk-rest-api.md).
-* **Layout**: Try out Form Recognizer's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://formrecognizer.appliedai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/try-v3-python-sdk.md#layout-model).
+* **Layout**: Try out Form Recognizer's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://formrecognizer.appliedai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-v3-sdk-rest-api.md#layout-model).
-* **General Documents**: Try out Form Recognizer's General Documents feature to extract key-value pairs and entities. Start with the [Studio General Documents feature](https://formrecognizer.appliedai.azure.com/studio/document). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [General Documents overview](concept-general-document.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/try-v3-python-sdk.md#general-document-model).
+* **General Documents**: Try out Form Recognizer's General Documents feature to extract key-value pairs and entities. Start with the [Studio General Documents feature](https://formrecognizer.appliedai.azure.com/studio/document). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [General Documents overview](concept-general-document.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model).
-* **Prebuilt models**: Form Recognizer's pre-built models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/try-v3-python-sdk.md#prebuilt-model).
+* **Prebuilt models**: Form Recognizer's pre-built models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model).
-* **Custom models**: Form Recognizer's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. Create standalone custom models or combine two or more custom models to create a composed model to extract data from multiple form types. Start with the [Studio Custom models feature](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects). Use the online wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. See the [Custom models overview](concept-custom.md) to learn more and use the [Form Recognizer v3.0 preview migration guide](v3-migration-guide.md) to start integrating the new models with your applications.
+* **Custom models**: Form Recognizer's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. Create standalone custom models or combine two or more custom models to create a composed model to extract data from multiple form types. Start with the [Studio Custom models feature](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects). Use the online wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. See the [Custom models overview](concept-custom.md) to learn more and use the [Form Recognizer v3.0 migration guide](v3-migration-guide.md) to start integrating the new models with your applications.
## Next steps * Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn the differences from the previous version of the REST API.
-* Explore our [**preview SDK quickstarts**](quickstarts/try-v3-python-sdk.md) to try the preview features in your applications using the new SDKs.
-* Refer to our [**preview REST API quickstarts**](quickstarts/try-v3-rest-api.md) to try the preview features using the new REST API.
+* Explore our [**v3.0 SDK quickstarts**](quickstarts/get-started-v3-sdk-rest-api.md) to try the v3.0 features in your applications using the new SDKs.
+* Refer to our [**v3.0 REST API quickstarts**](quickstarts/get-started-v3-sdk-rest-api.md) to try the v3.0 features using the new REST API.
> [!div class="nextstepaction"]
-> [Form Recognizer Studio (preview) quickstart](quickstarts/try-v3-form-recognizer-studio.md)
+> [Form Recognizer Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md)
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Title: Form Recognizer general document model | Preview
+ Title: Form Recognizer general document model
-description: Concepts related to data extraction and analysis using prebuilt general document preview model
+description: Concepts related to data extraction and analysis using prebuilt general document v3.0 model
Previously updated : 07/20/2022 Last updated : 08/22/2022 recommendations: false <!-- markdownlint-disable MD033 -->
-# Form Recognizer general document model (preview)
+# Form Recognizer general document model
-The General document preview model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is only available with the preview (v3.0) API. For more information on using the preview (v3.0) API, see our [migration guide](v3-migration-guide.md).
+The General document v3.0 model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is only available with the v3.0 API. For more information on using the v3.0 API, see our [migration guide](v3-migration-guide.md).
The general document API supports most form types and will analyze your documents and extract keys and associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels. > [!NOTE]
-> The ```2022-06-30``` update to the general document model adds support for selection marks.
+> The ```2022-06-30``` and later versions of the general document model add support for selection marks.
## General document features
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | |-|-|
-|🆕 **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
+| **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
### Try Form Recognizer
You'll need the following resources:
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+#### Form Recognizer Studio
> [!NOTE]
-> Form Recognizer studio and the general document model are available with the preview (v3.0) API.
+> Form Recognizer Studio and the general document model are available with the v3.0 API.
1. On the Form Recognizer Studio home page, select **General documents**
You'll need the following resources:
Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field. In an unstructured document, they could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
-Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are spans of text contained in the document. If you have documents where the same value is described in different ways, for example, customer and user, the associated key will be either customer or user based on context.
+Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are spans of text contained in the document. If you have documents where the same value is described in different ways, for example, customer and user, the associated key will be either customer or user, based on context.
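The key-value behavior described above can be sketched in a few lines. This is a minimal illustration, not SDK code: the field names (`keyValuePairs`, `key`, `value`, `content`) follow the shape of the v3.0 analyze response, but the sample payload itself is made up, and keys without a value (like the blank middle-name field mentioned above) map to `None`.

```python
# Sketch: collecting key-value pairs from a v3.0-style analyzeResult payload.
# Sample data is hypothetical; field names follow the general document response shape.
sample_result = {
    "keyValuePairs": [
        {"key": {"content": "Customer:"}, "value": {"content": "Contoso Ltd."}, "confidence": 0.97},
        {"key": {"content": "Middle name:"}, "confidence": 0.82},  # key detected with no value
    ]
}

def collect_pairs(analyze_result):
    """Return {key: value} pairs; keys without an associated value map to None."""
    pairs = {}
    for kv in analyze_result.get("keyValuePairs", []):
        key = kv["key"]["content"]
        pairs[key] = kv["value"]["content"] if "value" in kv else None
    return pairs

print(collect_pairs(sample_result))
```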
## Data extraction
Keys can also exist in isolation when the model detects that a key exists, with
## Next steps
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
+* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use version 3.0 in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about version 3.0 and its new capabilities.
> [!div class="nextstepaction"] > [Try the Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false
recommendations: false
# Form Recognizer ID document model
-The ID document model combines Optical Character Recognition (OCR) with deep learning models to analyze and extracts key information from US Drivers Licenses (all 50 states and District of Columbia) and international passport biographical pages (excludes visa and other travel documents). The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
+The ID document model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract key information from US driver's licenses (all 50 states and District of Columbia), international passport biographical pages, US state IDs, Social Security cards, green cards, and more. The API analyzes identity documents, extracts key information, and returns a structured JSON data representation.
***Sample U.S. Driver's License processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)***
The ID document model combines Optical Character Recognition (OCR) with deep lea
## Development options
+The following tools are supported by Form Recognizer v3.0:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-idDocument**|
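For the REST option in the table above, the v3.0 analyze request is a POST against the `documentModels/{modelId}:analyze` path with the `2022-08-31` API version. As a sketch (the endpoint host below is a placeholder, and the path shape is assumed from the linked REST reference), the URL can be composed like this:

```python
# Sketch: composing the v3.0 analyze request URL for a prebuilt model.
# The endpoint value is a placeholder for your resource's endpoint.
def analyze_url(endpoint: str, model_id: str, api_version: str = "2022-08-31") -> str:
    """Build the documentModels/{modelId}:analyze URL for the v3.0 REST API."""
    return (
        f"{endpoint.rstrip('/')}/formrecognizer/documentModels/"
        f"{model_id}:analyze?api-version={api_version}"
    )

url = analyze_url("https://contoso.cognitiveservices.azure.com", "prebuilt-idDocument")
print(url)
```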
+ The following tools are supported by Form Recognizer v2.1: | Feature | Resources | |-|-| |**ID document model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-identity-id-documents)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-The following tools are supported by Form Recognizer v3.0:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-idDocument**|
- ### Try Form Recognizer
-Extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
+Extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio. You'll need the following resources:
* An Azure subscriptionΓÇöyou can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
Extract data, including name, birth date, machine-readable zone, and expiration
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+#### Form Recognizer Studio
> [!NOTE]
-> Form Recognizer studio is available with the preview (v3.0) API.
+> Form Recognizer Studio is available with the v3.0 API (API version 2022-08-31, the generally available (GA) release).
1. On the Form Recognizer Studio home page, select **Identity documents**
Extract data, including name, birth date, machine-readable zone, and expiration
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)
-#### Sample Labeling tool (API v2.1)
-
-You'll need an ID document. You can use our [sample ID document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
-
-1. On the Sample Labeling tool home page, select **Use prebuilt model to get data**.
-
-1. Select **Identity documents** from the **Form Type** dropdown menu:
-
- :::image type="content" source="media/try-id-document.png" alt-text="Screenshot: Sample Labeling tool dropdown prebuilt model selection menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)
- ## Input requirements [!INCLUDE [input requirements](./includes/input-requirements.md)]
-> [!NOTE]
-> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
-
-## Supported languages and locales v2.1
+## Supported languages and locales
| Model | LanguageΓÇöLocale code | Default | |--|:-|:|
-|ID document| <ul><li>English (United States)ΓÇöen-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li></ul></br>|English (United States)ΓÇöen-US|
+|ID document| <ul><li>English (United States)ΓÇöen-US (driver's license)</li><li>Biographical pages from international passports</br> (excluding visa and other travel documents)</li><li>English (United States)ΓÇöen-US (state ID)</li><li>English (United States)ΓÇöen-US (social security card)</li><li>English (United States)ΓÇöen-US (Green card)</li></ul></br>|English (United States)ΓÇöen-US|
-## Field extraction
+## Field extractions
|Name| Type | Description | Standardized output| |:--|:-|:-|:-|
You'll need an ID document. You can use our [sample ID document](https://raw.git
| Address | String | Extracted address (Driver's License only) || | Region | String | Extracted region, state, province, etc. (Driver's License only) | |
-## Form Recognizer preview v3.0
+## Form Recognizer v3.0
- The Form Recognizer preview v3.0 introduces several new features and capabilities:
+ Form Recognizer v3.0 introduces several new features and capabilities:
* **ID document (v3.0)** prebuilt model supports extraction of endorsement, restriction, and vehicle class codes from US driver's licenses.
-* The ID Document **2022-06-30-preview** release supports the following data extraction from US driver's licenses:
+* The ID Document **2022-06-30** and later releases support the following data extraction from US driver's licenses:
* Date issued * Height
You'll need an ID document. You can use our [sample ID document](https://raw.git
* Hair color * Document discriminator security code
-### ID document preview field extraction
+### ID document field extractions
|Name| Type | Description | Standardized output| |:--|:-|:-|:-|
-| 🆕 DateOfIssue | Date | Issue date | yyyy-mm-dd |
-| 🆕 Height | String | Height of the holder. | |
-| 🆕 Weight | String | Weight of the holder. | |
-| 🆕 EyeColor | String | Eye color of the holder. | |
-| 🆕 HairColor | String | Hair color of the holder. | |
-| 🆕 DocumentDiscriminator | String | Document discriminator is a security code that identifies where and when the license was issued. | |
+| DateOfIssue | Date | Issue date | yyyy-mm-dd |
+| Height | String | Height of the holder. | |
+| Weight | String | Weight of the holder. | |
+| EyeColor | String | Eye color of the holder. | |
+| HairColor | String | Hair color of the holder. | |
+| DocumentDiscriminator | String | Document discriminator is a security code that identifies where and when the license was issued. | |
| Endorsements | String | More driving privileges granted to a driver such as Motorcycle or School bus. | | | Restrictions | String | Restricted driving privileges applicable to suspended or revoked licenses.| | | VehicleClassification | String | Types of vehicles that can be driven by a driver. ||
You'll need an ID document. You can use our [sample ID document](https://raw.git
| Nationality | countryRegion | Country or region code compliant with ISO 3166 standard (Passport only) | | | Sex | String | Possible extracted values include "M", "F" and "X" | | | MachineReadableZone | Object | Extracted Passport MRZ including two lines of 44 characters each | "P<USABROOKS<<JENNIFER<<<<<<<<<<<<<<<<<<<<<<< 3400200135USA8001014F1905054710000307<715816" |
-| DocumentType | String | Document type, for example, Passport, Driver's License | "passport" |
-| Address | String | Extracted address (Driver's License only) ||
+| DocumentType | String | Document type, for example, Passport, Driver's License, Social security card and more | "passport" |
+| Address | String | Extracted address; the address is also parsed into its components: street address, city, state, country, zip code ||
| Region | String | Extracted region, state, province, etc. (Driver's License only) | | ### Migration guide and REST API v3.0
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
+* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use version 3.0 in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about version 3.0 and its new capabilities.
## Next steps
You'll need an ID document. You can use our [sample ID document](https://raw.git
* Explore our REST API: > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+ > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false
recommendations: false
## Development options
+The following tools are supported by Form Recognizer v3.0:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-invoice**|
+ The following tools are supported by Form Recognizer v2.1: | Feature | Resources | |-|-| |**Invoice model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-invoices)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-The following tools are supported by Form Recognizer v3.0:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-invoice**|
### Try Form Recognizer
-See how data, including customer information, vendor details, and line items, is extracted from invoices using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
+See how data, including customer information, vendor details, and line items, is extracted from invoices using the Form Recognizer Studio. You'll need the following resources:
* An Azure subscriptionΓÇöyou can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data, including customer information, vendor details, and line items, is
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+#### Form Recognizer Studio
1. On the Form Recognizer Studio home page, select **Invoices**
See how data, including customer information, vendor details, and line items, is
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
-#### Sample Labeling tool (API v2.1)
-> [!NOTE]
-> Unless you must use API v2.1, it is strongly suggested that you use the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com) for testing purposes instead of the sample labeling tool.
-
-You'll need an invoice document. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf).
-
-1. On the Sample Labeling tool home page, select **Use prebuilt model to get data**.
-
-1. Select **Invoice** from the **Form Type** dropdown menu:
-
- :::image type="content" source="media/try-invoice.png" alt-text="Screenshot: Sample Labeling tool dropdown prebuilt model selection menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)
- ## Input requirements [!INCLUDE [input requirements](./includes/input-requirements.md)]
You'll need an invoice document. You can use our [sample invoice document](https
|--|:-|:| |Invoice| <ul><li>English (United States)ΓÇöen-US</li></ul>| English (United States)ΓÇöen-US| |Invoice| <ul><li>SpanishΓÇöes</li></ul>| Spanish (United States)ΓÇöes|
-|Invoice (preview)| <ul><li>GermanΓÇöde</li></ul>| German (Germany)-de|
-|Invoice (preview)| <ul><li>FrenchΓÇöfr</li></ul>| French (France)ΓÇöfr|
-|Invoice (preview)| <ul><li>ItalianΓÇöit</li></ul>| Italian (Italy)ΓÇöit|
-|Invoice (preview)| <ul><li>PortugueseΓÇöpt</li></ul>| Portuguese (Portugal)ΓÇöpt|
-|Invoice (preview)| <ul><li>DutchΓÇönl</li></ul>| Dutch (Netherlands)ΓÇönl|
+|Invoice | <ul><li>GermanΓÇöde</li></ul>| German (Germany)-de|
+|Invoice | <ul><li>FrenchΓÇöfr</li></ul>| French (France)ΓÇöfr|
+|Invoice | <ul><li>ItalianΓÇöit</li></ul>| Italian (Italy)ΓÇöit|
+|Invoice | <ul><li>PortugueseΓÇöpt</li></ul>| Portuguese (Portugal)ΓÇöpt|
+|Invoice | <ul><li>DutchΓÇönl</li></ul>| Dutch (Netherlands)ΓÇönl|
## Field extraction
Following are the line items extracted from an invoice in the JSON output respon
The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
-### Key-value pairs (Preview)
+### Key-value pairs
-The prebuilt invoice **2022-06-30-preview** release returns key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field or telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
+The prebuilt invoice **2022-06-30** and later releases return key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field or a telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. key-value pairs are always spans of text contained in the document. If you have documents where the same value is described in different ways, for example, a customer or a user, the associated key will be either customer or user based on context.
-## Form Recognizer preview v3.0
+## Form Recognizer v3.0
- The Form Recognizer preview introduces several new features, capabilities, and AI quality improvements to underlying technologies.
+ Form Recognizer v3.0 introduces several new features, capabilities, and AI quality improvements to underlying technologies.
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
+* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use version 3.0 in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about version 3.0 and its new capabilities.
## Next steps
Keys can also exist in isolation when the model detects that a key exists, with
* Explore our REST API: > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0 (Preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+ > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
> [!div class="nextstepaction"] > [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5ed8c9843c2794cbb1a96291)
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false- # Form Recognizer layout model
The paragraph roles are best used with unstructured documents. Paragraph roles
## Development options
-The following tools are supported by Form Recognizer v2.1:
-
-| Feature | Resources |
-|-|-|
-|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
- The following tools are supported by Form Recognizer v3.0: | Feature | Resources | Model ID | |-|||
-|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-layout**|
+|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-layout**|
+
+The following tools are supported by Form Recognizer v2.1:
+
+| Feature | Resources |
+|-|-|
+|**Layout API**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/layout-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-layout)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
## Try Form Recognizer
Try extracting data from forms and documents using the Form Recognizer Studio. Y
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-### Form Recognizer Studio (preview)
+### Form Recognizer Studio
> [!NOTE]
-> Form Recognizer studio is available with the preview (v3.0) API.
+> Form Recognizer Studio is available with the v3.0 API.
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
The layout model extracts text, selection marks, tables, paragraphs, and paragra
Layout API extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines, if detected, along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+```json
+{
+ "words": [
+ {
+ "content": "CONTOSO",
+ "polygon": [
+ 76,
+ 30,
+ 118,
+ 32,
+ 118,
+ 43,
+ 76,
+ 43
+ ],
+ "confidence": 1,
+ "span": {
+ "offset": 0,
+ "length": 7
+ }
+ }
+ ]
+}
+
+```
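The `offset` and `length` values in each `span` index into the top-level `content` string that contains the full text of the document. A minimal sketch, using an abbreviated, hypothetical result:

```python
# Minimal sketch: resolving a word's `span` against the top-level `content`
# string. The abbreviated result below is hypothetical, not a full response.

analyze_result = {
    "content": "CONTOSO\nTuesday, Sep 20, YYYY",
    "pages": [
        {
            "words": [
                {"content": "CONTOSO", "confidence": 1, "span": {"offset": 0, "length": 7}},
            ]
        }
    ],
}

def span_text(result: dict, span: dict) -> str:
    """Slice the full document text using the span's offset and length."""
    return result["content"][span["offset"] : span["offset"] + span["length"]]

for page in analyze_result["pages"]:
    for word in page["words"]:
        # The sliced text matches the word's own `content` value.
        print(span_text(analyze_result, word["span"]), word["confidence"])
```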
+### Selection marks
+
+Layout API also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected`/`unselected`). Any associated text, if extracted, is also included as the starting index (`offset`) and `length` that reference the top-level `content` property containing the full text from the document.
+```json
+{
+ "selectionMarks": [
+ {
+ "state": "unselected",
+ "polygon": [
+ 217,
+ 862,
+ 254,
+ 862,
+ 254,
+ 899,
+ 217,
+ 899
+ ],
+ "confidence": 0.995,
+ "span": {
+ "offset": 1421,
+ "length": 12
+ }
+ }
+ ]
+}
+
+```
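Because each selection mark carries a `state` and a `span`, you can filter for checked marks and pull their associated text from the top-level `content`. A minimal sketch with an abbreviated, hypothetical result:

```python
# Minimal sketch: keeping only checked selection marks and resolving their
# associated text via `span`. The abbreviated result below is hypothetical.

analyze_result = {
    # Padding stands in for the rest of the document text.
    "content": "x" * 1421 + "Yes, sign me up",
    "pages": [
        {
            "selectionMarks": [
                {"state": "selected", "confidence": 0.99,
                 "span": {"offset": 1421, "length": 15}},
                {"state": "unselected", "confidence": 0.995,
                 "span": {"offset": 100, "length": 2}},
            ]
        }
    ],
}

checked_texts = []
for page in analyze_result["pages"]:
    for mark in page["selectionMarks"]:
        if mark["state"] == "selected":
            span = mark["span"]
            # Slice the top-level content to get the text tied to the mark.
            checked_texts.append(
                analyze_result["content"][span["offset"] : span["offset"] + span["length"]]
            )

print(checked_texts)  # ['Yes, sign me up']
```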
+### Tables and table headers
+
+Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Extracted table information includes the number of columns and rows, row span, and column span. Each cell is output with its bounding `polygon`, along with a flag indicating whether it's recognized as a `columnHeader`. The API also works with rotated tables. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top-level `content` that contains the full text from the document.
+```json
+{
+ "tables": [
+ {
+ "rowCount": 9,
+ "columnCount": 4,
+ "cells": [
+ {
+ "kind": "columnHeader",
+ "rowIndex": 0,
+ "columnIndex": 0,
+ "columnSpan": 4,
+ "content": "(In millions, except earnings per share)",
+ "boundingRegions": [
+ {
+ "pageNumber": 1,
+ "polygon": [
+ 36,
+ 184,
+ 843,
+ 183,
+ 843,
+ 209,
+ 36,
+ 207
+ ]
+ }
+ ],
+ "spans": [
+ {
+ "offset": 511,
+ "length": 40
+ }
+ ]
+ },
+ ]
+ }
+ .
+ .
+ .
+ ]
+}
+
+```
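Since every cell reports its `rowIndex` and `columnIndex`, an extracted table can be rebuilt as a simple 2-D grid. A minimal sketch, using an abbreviated, hypothetical table:

```python
# Minimal sketch: rebuilding an extracted table as a 2-D grid from the
# `rowIndex`/`columnIndex` values on each cell. The sample table below is
# abbreviated and hypothetical.

table = {
    "rowCount": 2,
    "columnCount": 2,
    "cells": [
        {"kind": "columnHeader", "rowIndex": 0, "columnIndex": 0, "content": "Item"},
        {"kind": "columnHeader", "rowIndex": 0, "columnIndex": 1, "content": "Total"},
        {"rowIndex": 1, "columnIndex": 0, "content": "Widget"},
        {"rowIndex": 1, "columnIndex": 1, "content": "9.99"},
    ],
}

def table_to_grid(table: dict) -> list[list[str]]:
    """Place each cell's content at its (row, column) position.

    Cells with rowSpan/columnSpan occupy only their anchor position here;
    a fuller implementation would replicate them across the spanned cells.
    """
    grid = [[""] * table["columnCount"] for _ in range(table["rowCount"])]
    for cell in table["cells"]:
        grid[cell["rowIndex"]][cell["columnIndex"]] = cell["content"]
    return grid

print(table_to_grid(table))  # [['Item', 'Total'], ['Widget', '9.99']]
```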
+### Paragraphs
+
+The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top-level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
+```json
+{
+ "paragraphs": [
+ {
+ "spans": [
+ {
+ "offset": 0,
+ "length": 21
+ }
+ ],
+ "boundingRegions": [
+ {
+ "pageNumber": 1,
+ "polygon": [
+ 75,
+ 30,
+ 118,
+ 31,
+ 117,
+ 68,
+ 74,
+ 67
+ ]
+ }
+ ],
+ "content": "Tuesday, Sep 20, YYYY"
+ }
+ ]
+}
+
+```
+### Paragraph roles
+
+The Layout model may flag certain paragraphs with their specialized type, or `role`, as predicted by the model. Roles are best used with unstructured documents to help understand the layout of the extracted content for richer semantic analysis. The following paragraph roles are supported:
The Layout model may flag certain paragraphs with their specialized type or `rol
| `pageFooter` | Text near the bottom edge of the page |
| `pageNumber` | Page number |
+```json
+{
+ "paragraphs": [
+ {
+ "spans": [
+ {
+ "offset": 22,
+ "length": 10
+ }
+ ],
+ "boundingRegions": [
+ {
+ "pageNumber": 1,
+ "polygon": [
+ 139,
+ 10,
+ 605,
+ 8,
+ 605,
+ 56,
+ 139,
+ 58
+ ]
+ }
+ ],
+ "role": "title",
+ "content": "NEWS TODAY"
+ }
+ ]
+}
+
+```
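The `role` values make it straightforward to separate body text from page furniture before downstream processing. A minimal sketch, using abbreviated, hypothetical paragraphs:

```python
# Minimal sketch: using paragraph `role` values to drop page furniture
# (headers, footers, page numbers) before downstream text processing.
# The sample paragraphs below are abbreviated and hypothetical.

paragraphs = [
    {"role": "title", "content": "NEWS TODAY"},
    {"content": "Body text of the article."},
    {"role": "pageNumber", "content": "1"},
    {"role": "pageFooter", "content": "Contoso Daily"},
]

PAGE_FURNITURE = {"pageHeader", "pageFooter", "pageNumber"}

# Paragraphs without a role are plain body text, so keep them too.
body = [p["content"] for p in paragraphs if p.get("role") not in PAGE_FURNITURE]
print(body)  # ['NEWS TODAY', 'Body text of the article.']
```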
+### Select page numbers or ranges for text extraction
+
+For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
For large multi-page documents, use the `pages` query parameter to indicate spec
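A minimal sketch of building an analyze request URL that restricts extraction to selected pages; the endpoint below is a placeholder, and the exact path and API version should be checked against the REST reference:

```python
# Minimal sketch: passing the `pages` query parameter on an analyze request.
# The endpoint is a placeholder; path and API version are assumptions to
# verify against the REST API reference.
from urllib.parse import urlencode

endpoint = "https://<your-resource>.cognitiveservices.azure.com"

def analyze_url(model_id: str, pages: str, api_version: str = "2022-08-31") -> str:
    """Build the analyze URL, restricting extraction to the given pages."""
    query = urlencode({"api-version": api_version, "pages": pages})
    return f"{endpoint}/formrecognizer/documentModels/{model_id}:analyze?{query}"

# Analyze only pages 1 through 3 and page 5.
url = analyze_url("prebuilt-layout", "1-3,5")
print(url)
```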
* Explore our REST API:
- > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+ > [!div class="nextstepaction"]
+ > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false
| **Model** | **Description** |
| --- | --- |
|**Document analysis**||
-| 🆕[Read (preview)](#read-preview) | Extract typeface and handwritten text lines, words, locations, and detected languages.|
-| 🆕[General document (preview)](#general-document-preview) | Extract text, tables, structure, key-value pairs, and named entities.|
+| [Read](#read) | Extract typeface and handwritten text lines, words, locations, and detected languages.|
+| [General document](#general-document) | Extract text, tables, structure, key-value pairs, and named entities.|
| [Layout](#layout) | Extract text and layout information from documents.|
|**Prebuilt**||
-| 🆕[W-2 (preview)](#w-2-preview) | Extract employee, employer, wage information, etc. from US W-2 forms. |
+| [W-2](#w-2) | Extract employee, employer, wage information, etc. from US W-2 forms. |
| [Invoice](#invoice) | Extract key information from English and Spanish invoices. |
| [Receipt](#receipt) | Extract key information from English receipts. |
| [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
| [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
| [Composed](#composed-custom-model) | Compose a collection of custom models and assign them to a single model built from your form types. |
-### Read (preview)
+### Read
[:::image type="icon" source="media/studio/read-card.png" :::](https://formrecognizer.appliedai.azure.com/studio/read)
The Read API analyzes and extracts text lines, words, their locations, detected l
> [!div class="nextstepaction"]
> [Learn more: read model](concept-read.md)
-### W-2 (preview)
+### W-2
[:::image type="icon" source="media/studio/w2.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
The W-2 model analyzes and extracts key information reported in each box on a W-
> [!div class="nextstepaction"]
> [Learn more: W-2 model](concept-w2.md)
-### General document (preview)
+### General document
[:::image type="icon" source="media/studio/general-document.png":::](https://formrecognizer.appliedai.azure.com/studio/document)
The invoice model analyzes and extracts key information from sales invoices. The
* The receipt model analyzes and extracts key information from printed and handwritten sales receipts.
-* The preview version v3.0 also supports single-page hotel receipt processing.
+* Version v3.0 also supports single-page hotel receipt processing.
***Sample receipt processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The business card model analyzes and extracts key information from business card
* Custom models analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
-* The preview version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
+* The v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
***Sample custom template processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
A composed model is created by taking a collection of custom models and assignin
| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** |
|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
-|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
-|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | |
+| [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
+| [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
+| [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | |
| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
-| [prebuilt-idDocument](concept-id-document.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
-| [prebuilt-businessCard](concept-business-card.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
+| [prebuilt-idDocument](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+| [prebuilt-businessCard](concept-business-card.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
| [Custom](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |

## Input requirements
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false
# Form Recognizer Read OCR model
-Form Recognizer v3.0 preview includes the new Read Optical Character Recognition (OCR) model. The Read OCR model extracts typeface and handwritten text including mixed languages in documents. The Read OCR model can detect lines, words, locations, and languages and is the core of all other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the Read OCR model as a foundation for extracting texts from documents.
+Form Recognizer v3.0 includes the new Read Optical Character Recognition (OCR) model. The Read OCR model extracts typeface and handwritten text including mixed languages in documents. The Read OCR model can detect lines, words, locations, and languages and is the core of all other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the Read OCR model as a foundation for extracting texts from documents.
+
+> [!NOTE]
+>
+> * Only API Version 2022-06-30-preview supports Microsoft Word, Excel, PowerPoint, and HTML file formats in addition to all other document types supported by the GA versions.
> * For these file formats, Read API ignores the pages parameter and extracts all pages by default. Each embedded image counts as 1 page unit, and each worksheet, slide, and page (up to 3,000 characters) counts as 1 page.
## Supported document types
Try extracting text from forms and documents using the Form Recognizer Studio. Y
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-### Form Recognizer Studio (preview)
+### Form Recognizer Studio
> [!NOTE]
-> Currently, Form Recognizer Studio doesn't support Microsoft Word, Excel, PowerPoint, and HTML file formats in the Read preview.
+> Currently, Form Recognizer Studio doesn't support Microsoft Word, Excel, PowerPoint, and HTML file formats with the Read v3.0 API.
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/read)***
Try extracting text from forms and documents using the Form Recognizer Studio. Y
## Supported languages and locales
-Form Recognizer preview version supports several languages for the read model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
+Form Recognizer v3.0 supports several languages for the read model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
## Data detection and extraction
Complete a Form Recognizer quickstart:
Explore our REST API:

> [!div class="nextstepaction"]
-> [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+> [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false
# Form Recognizer receipt model
-The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, total tax, and transaction total and returns a structured JSON data representation.
+The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns structured JSON data.
***Sample receipt processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The receipt model combines powerful Optical Character Recognition (OCR) capabili
## Development options
+The following tools are supported by Form Recognizer v3.0:
+
+| Feature | Resources | Model ID |
+|-|-|--|
+|**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|**prebuilt-receipt**|
The following tools are supported by Form Recognizer v2.1:

| Feature | Resources |
|-|-|
|**Receipt model**| <ul><li>[**Form Recognizer labeling tool**](https://fott-2-1.azurewebsites.net/prebuilts-analyze)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-receipts)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-The following tools are supported by Form Recognizer v3.0:
-
-| Feature | Resources | Model ID |
-|-|-|--|
-|**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li></ul>|**prebuilt-receipt**|
-
### Try Form Recognizer
-See how data, including time and date of transactions, merchant information, and amount totals, is extracted from receipts using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
+See how data, including time and date of transactions, merchant information, and amount totals, is extracted from receipts using the Form Recognizer Studio. You'll need the following resources:
* An Azure subscription: you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data, including time and date of transactions, merchant information, and
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+#### Form Recognizer Studio
> [!NOTE]
-> Form Recognizer studio is available with the preview (v3.0) API.
+> Form Recognizer Studio is available with the v3.0 API.
1. On the Form Recognizer Studio home page, select **Receipts**
See how data, including time and date of transactions, merchant information, and
> [!div class="nextstepaction"]
> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
-#### Sample Labeling tool (API v2.1)
-
-You'll need a receipt document. You can use our [sample receipt document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-receipt.png).
-
-1. On the Sample Labeling tool home page, select **Use prebuilt model to get data**.
-
-1. Select **Receipt** from the **Form Type** dropdown menu:
-
- :::image type="content" source="media/try-receipt.png" alt-text="Screenshot: Sample Labeling tool dropdown prebuilt model selection menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)
-
## Input requirements

[!INCLUDE [input requirements](./includes/input-requirements.md)]
You'll need a receipt document. You can use our [sample receipt document](https:
| TransactionTime | Time | Time the receipt was issued | hh-mm-ss (24-hour) |
| Total | Number (USD)| Full transaction total of receipt | Two-decimal float|
| Subtotal | Number (USD) | Subtotal of receipt, often before taxes are applied | Two-decimal float|
- | Tax | Number (USD) | Total tax on receipt (often sales tax or equivalent). **Renamed to "TotalTax" in 2022-06-30-preview version**. | Two-decimal float |
+ | Tax | Number (USD) | Total tax on receipt (often sales tax or equivalent). **Renamed to "TotalTax" in 2022-06-30 version**. | Two-decimal float |
| Tip | Number (USD) | Tip included by buyer | Two-decimal float|
| Items | Array of objects | Extracted line items, with name, quantity, unit price, and total price extracted | |
-| Name | String | Item description. **Renamed to "Description" in 2022-06-30-preview version**. | |
+| Name | String | Item description. **Renamed to "Description" in 2022-06-30 version**. | |
| Quantity | Number | Quantity of each item | Two-decimal float |
| Price | Number | Individual price of each item unit| Two-decimal float |
| TotalPrice | Number | Total price of line item | Two-decimal float |
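Each extracted field carries a type and a correspondingly named typed value. A minimal sketch of reading a few receipt fields from an analyze result; the sample payload is abbreviated and hypothetical:

```python
# Minimal sketch: reading typed receipt fields from an analyzeResult document.
# The sample payload below is abbreviated and hypothetical.

receipt_doc = {
    "docType": "receipt",
    "fields": {
        "MerchantName": {"type": "string", "valueString": "Contoso", "confidence": 0.98},
        "Total": {"type": "number", "valueNumber": 14.52, "confidence": 0.99},
        "TransactionDate": {"type": "date", "valueDate": "2022-06-06", "confidence": 0.97},
    },
}

def field_value(doc: dict, name: str):
    """Return the typed value of a field, or None if it wasn't extracted."""
    field = doc["fields"].get(name)
    if field is None:
        return None
    # The typed value lives under a key named after the field's type,
    # e.g. type "number" -> "valueNumber".
    return field.get("value" + field["type"].capitalize())

print(field_value(receipt_doc, "MerchantName"))  # Contoso
print(field_value(receipt_doc, "Total"))         # 14.52
```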
-## Form Recognizer preview v3.0
+## Form Recognizer v3.0
- The Form Recognizer preview introduces several new features and capabilities. The **Receipt** model supports single-page hotel receipt processing.
+ Form Recognizer v3.0 introduces several new features and capabilities. The **Receipt** model supports single-page hotel receipt processing.
### Hotel receipt field extraction
You'll need a receipt document. You can use our [sample receipt document](https:
### Migration guide and REST API v3.0
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
+* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use v3.0 in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about v3.0 and its new capabilities.
## Next steps
You'll need a receipt document. You can use our [sample receipt document](https:
* Explore our REST API:

> [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+ > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false
-# Form Recognizer W-2 model | Preview
+# Form Recognizer W-2 model | v3.0
The Form W-2, Wage and Tax Statement, is a [US Internal Revenue Service (IRS) tax form](https://www.irs.gov/forms-pubs/about-form-w-2). It's used to report employees' salary, wages, compensation, and taxes withheld. Employers send a W-2 form to each employee on or before January 31 each year, and employees use the form to prepare their tax returns. The W-2 is a key document in employees' federal and state tax filing, as well as in other processes such as mortgage loan applications and Social Security Administration (SSA) processing.
The prebuilt W-2 model is supported by Form Recognizer v3.0 with the following t
| Feature | Resources | Model ID |
|-|-|--|
-|**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
+|**W-2 model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|**prebuilt-tax.us.w2**|
### Try Form Recognizer
Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need
#### Form Recognizer Studio

> [!NOTE]
-> Form Recognizer studio is available with v3.0 preview API.
+> Form Recognizer Studio is available with the v3.0 API.
1. On the [Form Recognizer Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **W-2**.
Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need
### Migration guide and REST API v3.0
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
+* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use v3.0 in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more about v3.0 and its new capabilities.
## Next steps

* Complete a Form Recognizer quickstart:

> [!div class="checklist"]
>
-> * [**REST API**](quickstarts/try-v3-rest-api.md)
-> * [**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)
-> * [**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)
-> * [**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)
-> * [**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>
+> * [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)
+> * [**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)
+> * [**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)
+> * [**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)
+> * [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 # Configure Form Recognizer containers
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Last updated 12/16/2021 keywords: on-premises, Docker, container, identify- # Install and run Form Recognizer v2.1-preview containers
>
> * The online request form requires that you provide a valid email address that belongs to the organization that owns the Azure subscription ID and that you have or have been granted access to that subscription.
-Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine-learning technology. Form Recognizer enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents and output structured data that includes the relationships in the original file.
+Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine-learning technology. Form Recognizer enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents. The results are delivered as structured data that includes the relationships in the original file.
In this article, you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment and are great for specific security and data governance requirements. Form Recognizer features are supported by six feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. (For the Receipt, Business Card, and ID Document containers, you'll also need the **Read** OCR container.)
The following table lists the supporting container(s) for each Form Recognizer c
#### Recommended CPU cores and memory
-> [!Note]
+> [!NOTE]
+>
> The minimum and recommended values are based on Docker limits and *not* the host machine resources.

##### Read, Layout, and Prebuilt containers
Azure Cognitive Services containers aren't licensed to run without being connect
### Connect to Azure
-The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. See the [Cognitive Services container FAQ](../../../cognitive-services/containers/container-faq.yml#how-does-billing-work) for an example of the information sent to Microsoft for billing.
+The container needs the billing argument values to run. These values allow the container to connect to the billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to Azure within the allowed time window, the container continues to run, but doesn't serve queries until the billing endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it can't connect to the billing endpoint within the 10 tries, the container stops serving requests. See the [Cognitive Services container FAQ](../../../cognitive-services/containers/container-faq.yml#how-does-billing-work) for an example of the information sent to Microsoft for billing.
### Billing arguments
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false #Customer intent: I want to learn how to create a Form Recognizer service in the Azure portal.
Let's get started:
1. Copy the key and endpoint values from your Form Recognizer resource and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
-1. If your overview page doesn't have the keys and endpoint visible, you can select the **Keys and Endpoint** button on the left navigation bar and retrieve them there.
+1. If your overview page doesn't have the keys and endpoint visible, select the **Keys and Endpoint** button on the left navigation bar and retrieve them there.
:::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL":::
That's it! You're now ready to start automating data extraction using Azure Form
* Complete a Form Recognizer quickstart and get started creating a document processing app in the development language of your choice:
- * [C#](quickstarts/try-v3-csharp-sdk.md)
- * [Python](quickstarts/try-v3-python-sdk.md)
- * [Java](quickstarts/try-v3-java-sdk.md)
- * [JavaScript](quickstarts/try-v3-javascript-sdk.md)
+ * [C#](quickstarts/get-started-v3-sdk-rest-api.md)
+ * [Python](quickstarts/get-started-v3-sdk-rest-api.md)
+ * [Java](quickstarts/get-started-v3-sdk-rest-api.md)
+ * [JavaScript](quickstarts/get-started-v3-sdk-rest-api.md)
applied-ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-sas-tokens.md
The SAS URL includes a special set of [query parameters](/rest/api/storageservic
### REST API
-To use your SAS URL with the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/BuildDocumentModel), add the SAS URL to the request body:
+To use your SAS URL with the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/BuildDocumentModel), add the SAS URL to the request body:
```json
{
To use your SAS URL with the [REST API](https://westus.dev.cognitive.microsoft.c
}
```
-### Sample Labeling Tool
-
-To use your SAS URL with the [Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net/connections/create), add the SAS URL to the **Connection Settings** → **Azure blob container** → **SAS URI** field:
-
- :::image type="content" source="media/sas-tokens/fott-add-sas-uri.png" alt-text="Screenshot that shows the SAS URI field.":::
That's it! You've learned how to create SAS tokens to authorize how clients access your data.

## Next step
applied-ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/deploy-label-tool.md
>[!TIP] >
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
+> * *See* our [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) or [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md), or [Python](quickstarts/get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with the v3.0 version.
> [!NOTE] > The [cloud hosted](https://fott-2-1.azurewebsites.net/) labeling tool is available at [https://fott-2-1.azurewebsites.net/](https://fott-2-1.azurewebsites.net/). Follow the steps in this document only if you want to deploy the sample labeling tool for yourself.
applied-ai-services Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/disaster-recovery.md
The process for copying a custom model consists of the following steps:
1. Next you send the copy request to the source resource&mdash;the resource that contains the model to be copied with the payload (copy authorization) returned from the previous call. You'll get back a URL that you can query to track the progress of the operation. 1. You'll use your source resource credentials to query the progress URL until the operation is a success. You can also query the new model ID in the target resource to get the status of the new model.
-### [Form Recognizer REST API v3.0 (Preview)](#tab/v30)
+### [Form Recognizer REST API v3.0](#tab/v30)
## Generate Copy authorization request The following HTTP request gets copy authorization from your target resource. You'll need to enter the endpoint and key of your target resource as headers. ```http
-POST https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/documentModels:authorizeCopy?api-version=2022-06-30-preview
+POST https://{TARGET_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31
Ocp-Apim-Subscription-Key: {TARGET_FORM_RECOGNIZER_RESOURCE_KEY} ```
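A minimal Python sketch of composing that authorization call (assuming the request body carries the target `modelId`, with placeholder endpoint and key values):

```python
import json

def authorize_copy_request(target_endpoint: str, target_key: str, target_model_id: str):
    """Build the POST that asks the *target* resource to pre-authorize a copy.
    The JSON payload the service returns is later passed to the source
    resource's :copyTo call."""
    url = (f"{target_endpoint}/formrecognizer/documentModels:authorizeCopy"
           "?api-version=2022-08-31")
    headers = {
        "Ocp-Apim-Subscription-Key": target_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({"modelId": target_model_id})
    return url, headers, body

# Placeholder values for illustration only.
url, headers, body = authorize_copy_request(
    "https://target-resource.cognitiveservices.azure.com",
    "<TARGET_FORM_RECOGNIZER_RESOURCE_KEY>",
    "backup-model",
)
```

Note that the key in the header belongs to the target resource here; the later `:copyTo` call switches to the source resource's endpoint and key.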
You'll get a `200` response code with response body that contains the JSON paylo
The following HTTP request starts the copy operation on the source resource. You'll need to enter the endpoint and key of your source resource as the url and header. Notice that the request URL contains the model ID of the source model you want to copy. ```http
-POST {{source-endpoint}}formrecognizer/documentModels/{model-to-be-copied}:copyTo?api-version=2022-06-30-preview
+POST {{source-endpoint}}formrecognizer/documentModels/{model-to-be-copied}:copyTo?api-version=2022-08-31
Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY} ```
You'll get a `202\Accepted` response with an Operation-Location header. This val
```http HTTP/1.1 202 Accepted
-Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-06-30-preview
+Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
```
-### [Form Recognizer REST API v2.1 (GA)](#tab/v21)
+### [Form Recognizer REST API v2.1](#tab/v21)
## Generate Copy authorization request
Operation-Location: https://{SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT}/formrecog
## Track Copy progress
-### [Form Recognizer v3.0 (Preview)](#tab/v30)
+### [Form Recognizer v3.0](#tab/v30)
```
-GET https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-06-30-preview
+GET https://{source-resource}.cognitiveservices.azure.com/formrecognizer/operations/{operation-id}?api-version=2022-08-31
Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY} ```
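The polling loop itself can be sketched independently of any HTTP client (a minimal sketch; the hypothetical `get_status` callable stands in for the GET against the Operation-Location URL):

```python
import time

TERMINAL_STATES = {"succeeded", "failed", "canceled"}

def poll_copy_operation(get_status, interval_s: float = 1.0, max_tries: int = 30):
    """Call get_status() until the operation reaches a terminal state.
    In practice get_status would GET the Operation-Location URL with the
    source resource key and return the parsed JSON body."""
    for _ in range(max_tries):
        op = get_status()
        if op.get("status") in TERMINAL_STATES:
            return op
        time.sleep(interval_s)
    raise TimeoutError("copy operation did not reach a terminal state")
```

Treating `failed` and `canceled` as terminal avoids polling forever on an operation that will never succeed.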
-### [Form Recognizer v2.1 (GA)](#tab/v21)
+### [Form Recognizer v2.1](#tab/v21)
Track your progress by querying the **Get Copy Model Result** API against the source resource endpoint.
curl -i GET "https://<SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT>/formrecognizer/v
## Next steps In this guide, you learned how to use the Copy API to back up your custom models to a secondary Form Recognizer resource. Next, explore the API reference docs to see what else you can do with Form Recognizer.
-* [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+* [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Form Recognizer Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/form-recognizer-studio-overview.md
Previously updated : 07/18/2022 Last updated : 08/22/2022 recommendations: false
recommendations: false
<!-- markdownlint-disable MD033 --> # What is Form Recognizer Studio?
->[!NOTE]
-> Form Recognizer Studio is currently in public preview. Some features may not be supported or have limited capabilities.
- Form Recognizer Studio is an online tool to visually explore, understand, train, and integrate features from the Form Recognizer service into your applications. The studio provides a platform for you to experiment with the different Form Recognizer models and sample their returned data in an interactive manner without the need to write code.
-The studio supports all Form Recognizer v3.0 models and v2.1 models with labeled data. Refer to the [REST API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+The studio supports Form Recognizer v3.0 models and v3.0 model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
## Get started using Form Recognizer Studio
The studio supports all Form Recognizer v3.0 models and v2.1 models with labeled
:::image type="content" source="media/studio/form-recognizer-studio-front.png" alt-text="Screenshot of Form Recognizer Studio front page.":::
-1. After you've tried Form Recognizer Studio, use the [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md) or [**Python**](quickstarts/try-v3-python-sdk.md) client libraries or the [**REST API**](quickstarts/try-v3-rest-api.md) to get started incorporating Form Recognizer models into your own applications.
+1. After you've tried Form Recognizer Studio, use the [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md) or [**Python**](quickstarts/get-started-v3-sdk-rest-api.md) client libraries or the [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) to get started incorporating Form Recognizer models into your own applications.
To learn more about each model, *see* concepts pages.
applied-ai-services Try V2 1 Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/try-v2-1-sdk-rest-api.md
+
+ Title: "Use Form Recognizer client library SDKs or REST API"
+
+description: How to use Form Recognizer client libraries or the REST API to create apps that extract key-value pairs and table data from your custom documents.
+++++ Last updated : 02/01/2022+
+zone_pivot_groups: programming-languages-set-formre
+recommendations: false
+++
+# Use Form Recognizer SDKs or REST API
+
+ In this how-to guide, you'll learn how to add Form Recognizer to your applications and workflows using an SDK in the programming language of your choice, or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+You'll use the following APIs to extract structured data from forms and documents:
+
+* [Authenticate the client](#authenticate-the-client)
+* [Analyze Layout](#analyze-layout)
+* [Analyze receipts](#analyze-receipts)
+* [Analyze business cards](#analyze-business-cards)
+* [Analyze invoices](#analyze-invoices)
+* [Analyze ID documents](#analyze-id-documents)
+* [Train a custom model](#train-a-custom-model)
+* [Analyze forms with a custom model](#analyze-forms-with-a-custom-model)
+* [Manage custom models](#manage-custom-models)
+++++++++++++++
applied-ai-services Use Prebuilt Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/use-prebuilt-read.md
recommendations: false
The read model is the core of all the other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the read model as a foundation for extracting texts from documents.
->[!NOTE]
-> Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-The current API version is ```2022-06-30```.
+The current API version is ```2022-08-31```.
::: zone pivot="programming-language-csharp"
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
keywords: document processing
>[!TIP] >
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
+> * *See* our [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) or [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md), or [Python](quickstarts/get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with v3.0.
In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data.
In the Sample Labeling tool, projects store your configurations and settings. Cr
When you create or open a project, the main tag editor window opens. The tag editor consists of three parts:
-* A resizable preview pane that contains a scrollable list of forms from the source connection.
+* A resizable preview pane that contains a scrollable list of forms from the source connection.
* The main editor pane that allows you to apply tags. * The tags editor pane that allows users to modify, lock, reorder, and delete tags.
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
This article covers the supported languages for text and field **extraction (by
## Read, layout, and custom form (template) model
-The following lists include the currently GA languages in for the v2.1 version and the most recent v3.0 preview. These languages are supported by Read, Layout, and Custom form (template) model features.
+The following lists include the currently GA languages for the v2.1 version and the most recent v3.0 version. These languages are supported by the Read, Layout, and Custom form (template) model features.
> [!NOTE] > **Language code optional** > > Form Recognizer's deep learning based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and do not require specifying a language code. Do not provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
-To use the preview languages, refer to the [v3.0 REST API migration guide](/rest/api/medi).
+To use the v3.0-supported languages, refer to the [v3.0 REST API migration guide](/rest/api/medi).
-### Handwritten text (preview and GA)
+### Handwritten text (v3.0 and v2.1)
The following table lists the supported languages for extracting handwritten texts. |Language| Language code (optional) | Language| Language code (optional) | |:--|:-:|:--|:-:|
-|English|`en`|Japanese (preview) |`ja`|
-|Chinese Simplified (preview) |`zh-Hans`|Korean (preview)|`ko`|
-|French (preview) |`fr`|Portuguese (preview)|`pt`|
-|German (preview) |`de`|Spanish (preview) |`es`|
-|Italian (preview) |`it`|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`|
-### Print text (preview)
+### Print text (v3.0)
-This section lists the supported languages for extracting printed texts in the latest preview.
+This section lists the supported languages for extracting printed texts using v3.0.
|Language| Code (optional) |Language| Code (optional) | |:--|:-:|:--|:-:|
This section lists the supported languages for extracting printed texts in the l
|Kurukh (Devanagari) | `kru`|Welsh | `cy` |Kyrgyz (Cyrillic) | `ky`
-### Print text (GA)
+### Print text
This section lists the supported languages for extracting printed texts in the latest GA version.
Business Card supports all English business cards with the following locales:
|English (India)|`en-in`| |English (United States)| `en-us`|
-The **2022-06-30-preview** release includes Japanese language support:
+The **2022-06-30** and later releases include Japanese language support:
|Language| Locale code | |:--|:-:|
Language| Locale code |
|:--|:-:| |English (United States) |en-US| |Spanish| es|
-|German (**2022-06-30-preview**)| de|
-|French (**2022-06-30-preview**)| fr|
-|Italian (**2022-06-30-preview**)|it|
-|Portuguese (**2022-06-30-preview**)|pt|
-|Dutch (**2022-06-30-preview**)| nl|
+|German (**2022-06-30** and later)| de|
+|French (**2022-06-30** and later)| fr|
+|Italian (**2022-06-30** and later)|it|
+|Portuguese (**2022-06-30** and later)|pt|
+|Dutch (**2022-06-30** and later)| nl|
## ID document model
applied-ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md
This how-to guide will walk you through the process of enabling secure connectio
* Communication between a client application within a Virtual Network (VNET) and your Form Recognizer Resource.
-* Communication between Form Recognizer Studio or the sample labeling tool (FOTT) and your Form Recognizer resource.
+* Communication between Form Recognizer Studio and your Form Recognizer resource.
* Communication between your Form Recognizer resource and a storage account (needed when training a custom model).
applied-ai-services Overview Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview-experiment.md
Previously updated : 07/20/2022 Last updated : 08/22/2022 recommendations: false
This section will help you decide which Form Recognizer v3.0 supported model you
| Type of document | Data to extract |Document format | Your best solution | | --|-| -|-|
-|**A text-based document** like a contract or letter.|You want to extract primarily text lines, words, locations, and detected languages.|</li></ul>The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).| [**Read (preview) model**](concept-read.md)|
+|**A text-based document** like a contract or letter.|You want to extract primarily text lines, words, locations, and detected languages.|</li></ul>The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).| [**Read model**](concept-read.md)|
|**A document that includes structural information** like a report or study.|In addition to text, you need to extract structural information like tables, selection marks, paragraphs, titles, headings, and subheadings.|The document is written or printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model)| [**Layout model**](concept-layout.md)
-|**A structured or semi-structured document that includes content formatted as fields and values**, like a credit application or survey form.|You want to extract fields and values including ones not covered by the scenario-specific prebuilt models **without having to train a custom model**.| The form or document is a standardized format commonly used in your business or industry and printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).|[**General document (preview) model**](concept-general-document.md)
+|**A structured or semi-structured document that includes content formatted as fields and values**, like a credit application or survey form.|You want to extract fields and values including ones not covered by the scenario-specific prebuilt models **without having to train a custom model**.| The form or document is a standardized format commonly used in your business or industry and printed in a [supported language](language-support.md#read-layout-and-custom-form-template-model).|[**General document model**](concept-general-document.md)
|**U.S. W-2 form**|You want to extract key information such as salary, wages, and taxes withheld from US W2 tax forms.</li></ul> |The W-2 document is in United States English (en-US) text.|[**W-2 model**](concept-w2.md) |**Invoice**|You want to extract key information such as customer name, billing address, and amount due from invoices.</li></ul> |The invoice document is written or printed in a [supported language](language-support.md#invoice-model).|[**Invoice model**](concept-invoice.md) |**Receipt**|You want to extract key information such as merchant name, transaction date, and transaction total from a sales or single-page hotel receipt.</li></ul> |The receipt is written or printed in a [supported language](language-support.md#receipt-model). |[**Receipt model**](concept-receipt.md)|
This section will help you decide which Form Recognizer v3.0 supported model you
## Form Recognizer models and development options
-### [Form Recognizer preview (v3.0)](#tab/v3-0)
+### [Form Recognizer v3.0](#tab/v3-0)
The following models and development options are supported by the Form Recognizer service v3.0. You can Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities. Use the links in the table to learn more about each model and browse the API references. | Model | Description |Automation use cases | Development options | |-|--|-|--|
-|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
-|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
-|[🆕 **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul> |
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>|
-|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
-|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
+|[**Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.| <ul><li>Contract processing. </li><li>Financial or medical report processing.</li></ul>|<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
+|[**General document model**](concept-general-document.md)|Extract text, tables, structure, and key-value pairs.|<ul><li>Key-value pair extraction.</li><li>Form processing.</li><li>Survey data collection and analysis.</li></ul>|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li></ul> |
+|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. |<ul><li>Document indexing and retrieval by structure.</li><li>Preprocessing prior to OCR analysis.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
+|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br>Custom model API v3.0 now supports two model types:<ul><li>[**Custom Template model**](concept-custom-template.md) (custom form) is used to analyze structured and semi-structured documents.</li><li> [**Custom Neural model**](concept-custom-neural.md) (custom document) is used to analyze unstructured documents.</li></ul>|<ul><li>Identification and compilation of data, unique to your business, impacted by a regulatory change or market event.</li><li>Identification and analysis of previously overlooked unique data.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
+|[**W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>Automated tax document management.</li><li>Mortgage loan application processing.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> |
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. |<ul><li>Accounts payable processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.|<ul><li>Expense management.</li><li>Consumer behavior data analysis.</li><li>Customer loyalty program.</li><li>Merchandise return processing.</li><li>Automated tax recording and reporting.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li>Know your customer (KYC) financial services guidelines compliance.</li><li>Medical account management.</li><li>Identity checkpoints and gateways.</li><li>Hotel registration.</li></ul> |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.|<ul><li>Sales lead and marketing management.</li></ul> |<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
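All of the v3.0 models above are addressed through a single `documentModels/{modelId}:analyze` REST route. A minimal Python sketch of that naming follows; the friendly-name-to-model-ID mapping is illustrative, and the API version shown is the 2022-08-31 GA release:

```python
# Illustrative mapping from the v3.0 models listed above to their model IDs;
# the "prebuilt-*" names follow the service's convention, but treat this
# table as a sketch rather than an exhaustive list.
V3_MODEL_IDS = {
    "Read": "prebuilt-read",
    "General document": "prebuilt-document",
    "Layout": "prebuilt-layout",
    "W-2 Form": "prebuilt-tax.us.w2",
    "Invoice": "prebuilt-invoice",
    "Receipt": "prebuilt-receipt",
    "ID document": "prebuilt-idDocument",
    "Business card": "prebuilt-businessCard",
}


def analyze_path(model_name: str) -> str:
    """Build the v3.0 REST analyze route for a model."""
    return (
        f"/formrecognizer/documentModels/{V3_MODEL_IDS[model_name]}:analyze"
        "?api-version=2022-08-31"
    )


print(analyze_path("Invoice"))
# -> /formrecognizer/documentModels/prebuilt-invoice:analyze?api-version=2022-08-31
```

Custom models use the same route, with your own model ID in place of a `prebuilt-*` name.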
### [Form Recognizer GA (v2.1)](#tab/v2-1)

>[!TIP]
>
- > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+ > * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
The following models are supported by Form Recognizer v2.1. Use the links in the table to learn more about each model and browse the API references.
| Model | Description | Development options |
|-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
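Unlike v3.0, the v2.1 models above each have their own REST route, and analysis is asynchronous: POST the document, then poll the URL returned in the `Operation-Location` header. A small Python sketch of the route layout (the endpoint value is a placeholder):

```python
# Documented v2.1 analyze routes for the models above; custom models use
# "custom/models/{modelId}/analyze" with your own trained model ID.
V2_1_ROUTES = {
    "Layout": "layout/analyze",
    "Invoice": "prebuilt/invoice/analyze",
    "Receipt": "prebuilt/receipt/analyze",
    "ID document": "prebuilt/idDocument/analyze",
    "Business card": "prebuilt/businessCard/analyze",
}


def v2_1_analyze_url(endpoint: str, model: str) -> str:
    """Compose the full analyze URL for a v2.1 model."""
    return f"{endpoint.rstrip('/')}/formrecognizer/v2.1/{V2_1_ROUTES[model]}"


print(v2_1_analyze_url("https://<your-resource>.cognitiveservices.azure.com", "Receipt"))
```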
This documentation contains the following article types:
> [!div class="checklist"]
>
> * Try our [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more.
+> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more.
> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.

### [Form Recognizer v2.1](#tab/v2-1)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Previously updated : 06/06/2022 Last updated : 08/22/2022 recommendations: false adobe-target: true
This section helps you decide which Form Recognizer v3.0 supported feature you should use.
| What type of document do you want to analyze? | How is the document formatted? | Your best solution |
| --|--|--|
-|<ul><li>**W-2 Form**</li></yl>| Is your W-2 document composed in United States English (en-US) text?|<ul><li>If **Yes**, use the [**W-2 Form**](concept-w2.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
+|<ul><li>**W-2 Form**</li></ul>| Is your W-2 document composed in United States English (en-US) text?|<ul><li>If **Yes**, use the [**W-2 Form**](concept-w2.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document**](concept-general-document.md) model.</li></ul>|
|<ul><li>**Primarily text content**</li></ul>| Is your document _printed_ in a [supported language](language-support.md#read-layout-and-custom-form-template-model) and are you only interested in text and not tables, selection marks, and the structure?|<ul><li>If **Yes** to text-only extraction, use the [**Read**](concept-read.md) model.</li><li>If **No**, because you also need structure information, use the [**Layout**](concept-layout.md) model.</li></ul>|
-|<ul><li>**General structured document**</li></yl>| Is your document mostly structured and does it contain a few fields and values that may not be covered by the other prebuilt models?|<ul><li>If **Yes**, use the [**General document (preview)**](concept-general-document.md) model.</li><li> If **No**, because the fields and values are complex and highly variable, train and build a [**Custom**](how-to-guides/build-custom-model-v3.md) model.</li></ul>
-|<ul><li>**Invoice**</li></yl>| Is your invoice document composed in a [supported language](language-support.md#invoice-model) text?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>
-|<ul><li>**Receipt**</li><li>**Business card**</li></ul>| Is your receipt or business card document composed in English text? | <ul><li>If **Yes**, use the [**Receipt**](concept-receipt.md) or [**Business Card**](concept-business-card.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
-|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model</li></ul>|
- |<ul><li>**Form** or **Document**</li></ul>| Is your form or document an industry-standard format commonly used in your business or industry?| <ul><li>If **Yes**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md).</li><li>If **No**, you can [**Train and build a custom model**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model).
+|<ul><li>**General structured document**</li></ul>| Is your document mostly structured and does it contain a few fields and values that may not be covered by the other prebuilt models?|<ul><li>If **Yes**, use the [**General document**](concept-general-document.md) model.</li><li> If **No**, because the fields and values are complex and highly variable, train and build a [**Custom**](how-to-guides/build-custom-model-v3.md) model.</li></ul>|
+|<ul><li>**Invoice**</li></ul>| Is your invoice document composed in a [supported language](language-support.md#invoice-model)?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document**](concept-general-document.md) model.</li></ul>|
+|<ul><li>**Receipt**</li><li>**Business card**</li></ul>| Is your receipt or business card document composed in English text? | <ul><li>If **Yes**, use the [**Receipt**](concept-receipt.md) or [**Business Card**](concept-business-card.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document**](concept-general-document.md) model.</li></ul>|
+|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document**](concept-general-document.md) model.</li></ul>|
+|<ul><li>**Form** or **Document**</li></ul>| Is your form or document an industry-standard format commonly used in your business or industry?| <ul><li>If **Yes**, use the [**Layout**](concept-layout.md) or [**General document**](concept-general-document.md) model.</li><li>If **No**, you can [**Train and build a custom model**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model).</li></ul>|
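The decision table above reduces to a short lookup with two fallbacks. A Python sketch follows; the document-type keys and the fallback behavior are simplifications for illustration, not part of the service API:

```python
def recommend_model(doc_type: str, standard_format: bool = True) -> str:
    """Mirror the decision table: map a document type to the v3.0 model
    to try first; unmatched or non-standard inputs fall back to
    layout or a custom model."""
    prebuilt = {
        "w-2": "prebuilt-tax.us.w2",
        "invoice": "prebuilt-invoice",
        "receipt": "prebuilt-receipt",
        "business card": "prebuilt-businessCard",
        "id document": "prebuilt-idDocument",
        "text": "prebuilt-read",
        "structured document": "prebuilt-document",
    }
    if doc_type in prebuilt:
        return prebuilt[doc_type]
    # Industry-standard layouts: layout (or general document); otherwise
    # train and build a custom model.
    return "prebuilt-layout" if standard_format else "custom (train your own)"


print(recommend_model("receipt"))  # -> prebuilt-receipt
print(recommend_model("lab report", standard_format=False))
```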
## Form Recognizer features and development options
-### [Form Recognizer preview (v3.0)](#tab/v3-0)
+### [Form Recognizer v3.0](#tab/v3-0)
The following features and development options are supported by the Form Recognizer service v3.0. Use the links in the table to learn more about each feature and browse the API references.

| Feature | Description | Development options |
|-|--|-|
-|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
-|[🆕 **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul> |
-|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs and, named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br></li><li>Custom model API v3.0 offers a new model type **Custom Neural** or custom document to analyze unstructured documents.</li></ul>| [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>|
-|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
-|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
-|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
+|[**Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul> |
+|[**W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul> |
+|[**General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs, and named entities.|<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#general-document-model)</li></ul> |
+|[**Layout model**](concept-layout.md) | Extract text, selection marks, and table structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#layout-model)</li></ul>|
+|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br></li><li>Custom model API v3.0 offers a new model type, **Custom Neural** (custom document), to analyze unstructured documents.</li></ul>| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
+|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/get-started-v3-sdk-rest-api.md)</li><li>[**C# SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md#prebuilt-model)</li></ul>|
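Whichever v3.0 model you pick, results come back in a shared `analyzeResult` schema; the general document model, for example, reports detected key-value pairs. A minimal Python sketch of reading them follows; the sample payload is trimmed and illustrative, not a full service response:

```python
# Trimmed, illustrative shape of a v3.0 general document result; the real
# payload also carries spans, bounding regions, pages, tables, and more.
sample_result = {
    "keyValuePairs": [
        {
            "key": {"content": "Invoice No:"},
            "value": {"content": "INV-100"},
            "confidence": 0.98,
        },
        {"key": {"content": "Signature:"}, "confidence": 0.52},  # no value found
    ]
}


def extract_pairs(result: dict) -> dict:
    """Flatten keyValuePairs into a plain {key: value} dict."""
    return {
        kv["key"]["content"]: kv["value"]["content"]
        for kv in result.get("keyValuePairs", [])
        if "value" in kv  # keys can be detected without a value
    }


print(extract_pairs(sample_result))  # -> {'Invoice No:': 'INV-100'}
```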
### [Form Recognizer GA (v2.1)](#tab/v2-1)
The following features are supported by Form Recognizer v2.1. Use the links in the table to learn more about each feature and browse the API references.
| Feature | Description | Development options |
|-|--|-|
-|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Layout API**](concept-layout.md) | Extraction and analysis of text, selection marks, tables, and bounding box coordinates, from forms and documents. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-layout)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-layout-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?branch=main&tabs=layout#run-the-container-with-the-docker-compose-up-command)</li></ul>|
|[**Custom model**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model)</li><li>[**REST API**](quickstarts/try-sdk-rest-api.md)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
-|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](quickstarts/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=invoice#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Receipt model**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=receipt#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**ID document model**](concept-id-document.md) | Automated data processing and extraction of key information from US driver's licenses and international passports.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=id-document#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+|[**Business card model**](concept-business-card.md) | Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer labeling tool**](quickstarts/try-sample-label-tool.md#analyze-using-a-prebuilt-model)</li><li>[**REST API**](quickstarts/get-started-v2-1-sdk-rest-api.md#try-it-prebuilt-model)</li><li>[**Client-library SDK**](how-to-guides/try-sdk-rest-api.md)</li><li>[**Form Recognizer Docker container**](containers/form-recognizer-container-install-run.md?tabs=business-card#run-the-container-with-the-docker-compose-up-command)</li></ul>|
This documentation contains the following article types:
> [!div class="checklist"]
>
> * Try our [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more.
+> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument) to learn more.
> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.

### [Form Recognizer v2.1](#tab/v2-1)
applied-ai-services Get Started V2 1 Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-v2-1-sdk-rest-api.md
+
+ Title: "Quickstart: Form Recognizer client library SDKs | REST API"
+
+description: Use the Form Recognizer client library SDKs or REST API to create a forms processing app that extracts key/value pairs and table data from your custom documents.
+ Last updated : 06/21/2021
+zone_pivot_groups: programming-languages-set-formre
+recommendations: false
+# Get started with Form Recognizer client library SDKs or REST API
+
+Get started with Azure Form Recognizer using the programming language of your choice. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
applied-ai-services Get Started V3 Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/get-started-v3-sdk-rest-api.md
+
+ Title: "Quickstart: Form Recognizer SDKs | REST API v3.0"
+
+description: Use a Form Recognizer SDK or the REST API to create a forms processing app that extracts key data from your documents.
+ Last updated : 08/22/2022
+zone_pivot_groups: programming-languages-set-formre
+recommendations: false
++
+# Quickstart: Form Recognizer SDKs | REST API v3.0
+
+Get started with the latest version of Azure Form Recognizer. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Form Recognizer models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+
+To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
+That's it, congratulations!
+
+In this quickstart, you used a Form Recognizer model to analyze various forms and documents. Next, explore the Form Recognizer Studio and reference documentation to learn about the Form Recognizer API in depth.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Try the Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio)
+
+> [!div class="nextstepaction"]
+> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
keywords: document processing
>[!TIP]
>
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](try-v3-rest-api.md) or [**C#**](try-v3-csharp-sdk.md), [**Java**](try-v3-java-sdk.md), [**JavaScript**](try-v3-javascript-sdk.md), or [Python](try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
+> * *See* our [**REST API**](get-started-v3-sdk-rest-api.md) or [**C#**](get-started-v3-sdk-rest-api.md), [**Java**](get-started-v3-sdk-rest-api.md), [**JavaScript**](get-started-v3-sdk-rest-api.md), or [Python](get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with the v3.0 version.
The Form Recognizer Sample Labeling tool is an open source tool that enables you to test the latest features of Azure Form Recognizer and Optical Character Recognition (OCR)
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
Title: "Quickstart: Form Recognizer Studio | Preview"
+ Title: "Quickstart: Form Recognizer Studio | v3.0"
-description: Form and document processing, data extraction, and analysis using Form Recognizer Studio (preview)
+description: Form and document processing, data extraction, and analysis using Form Recognizer Studio
-# Get started: Form Recognizer Studio | Preview
+# Get started: Form Recognizer Studio | v3.0
->[!NOTE]
-> Form Recognizer Studio is currently in public preview. Some features may not be supported or have limited capabilities.
-[Form Recognizer Studio preview](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. You can get started by exploring the pre-trained models with sample or your own documents. You can also create projects to build custom template models and reference the models in your applications using the [Python SDK preview](try-v3-python-sdk.md) and other quickstarts.
+[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service in your applications. You can get started by exploring the pre-trained models with sample documents or your own. You can also create projects to build custom template models and reference the models in your applications using the [Python SDK](get-started-v3-sdk-rest-api.md) and other quickstarts.
:::image border="true" type="content" source="../media/quickstarts/form-recognizer-demo-preview3.gif" alt-text="Selecting the Layout API to analyze a newspaper document in the Form Recognizer Studio.":::
Prebuilt models help you add Form Recognizer features to your apps without having to build, train, and publish your own models. You can choose from several prebuilt models, each of which has its own set of supported data fields. The choice of model to use for the analyze operation depends on the type of document to be analyzed. The following prebuilt models are currently supported by Form Recognizer:
-* [🆕 **General document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document): extract text, tables, structure, key-value pairs and named entities.
-* [🆕**W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extract text and key information from W-2 tax forms.
-* [🆕 **Read**](https://formrecognizer.appliedai.azure.com/studio/read): extract text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
+* [**General document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=document): extract text, tables, structure, key-value pairs and named entities.
+* [**W-2**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2): extract text and key information from W-2 tax forms.
+* [**Read**](https://formrecognizer.appliedai.azure.com/studio/read): extract text lines, words, their locations, detected languages, and handwritten style if detected from documents (PDF, TIFF) and images (JPG, PNG, BMP).
* [**Layout**](https://formrecognizer.appliedai.azure.com/studio/layout): extract text, tables, selection marks, and structure information from documents (PDF, TIFF) and images (JPG, PNG, BMP).
* [**Invoice**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice): extract text, selection marks, tables, key-value pairs, and key information from invoices.
* [**Receipt**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt): extract text and key information from receipts.
To label for signature detection: (Custom form only)
## Next steps

* Follow our [**Form Recognizer v3.0 migration guide**](../v3-migration-guide.md) to learn the differences from the previous version of the REST API.
-* Explore our [**preview SDK quickstarts**](try-v3-python-sdk.md) to try the preview features in your applications using the new SDKs.
-* Refer to our [**preview REST API quickstarts**](try-v3-rest-api.md) to try the preview features using the new RESt API.
+* Explore our [**v3.0 SDK quickstarts**](get-started-v3-sdk-rest-api.md) to try the v3.0 features in your applications using the new SDKs.
+* Refer to our [**v3.0 REST API quickstarts**](get-started-v3-sdk-rest-api.md) to try the v3.0 features using the new REST API.
-[Get started with the Form Recognizer Studio preview](https://formrecognizer.appliedai.azure.com).
+[Get started with the Form Recognizer Studio](https://formrecognizer.appliedai.azure.com).
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
- Title: "Quickstart: Form Recognizer REST API v3.0 | Preview"-
-description: Form and document processing, data extraction, and analysis using Form Recognizer REST API v3.0 (preview)
- Previously updated : 06/28/2022
-# Get started: Form Recognizer REST API 2022-06-30-preview
-
-<!-- markdownlint-disable MD036 -->
-
->[!NOTE]
-> Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-The current API version is **2022-06-30-preview**.
-
-| [Form Recognizer REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) | [Azure SDKs](https://azure.github.io/azure-sdk/releases/latest/index.html) |
-
-Get started with Azure Form Recognizer using the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models using the REST API or by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.
-
-## Form Recognizer models
-
- The REST API supports the following models and capabilities:
-
-**Document Analysis**
-
-* 🆕 Read—Analyze and extract printed (typeface) and handwritten text lines, words, locations, and detected languages.
-* 🆕General document—Analyze and extract text, tables, structure, key-value pairs, and named entities.
-* Layout—Analyze and extract tables, lines, words, and selection marks from documents, without the need to train a model.
-
-**Prebuilt Models**
-
-* 🆕 W-2—Analyze and extract fields from US W-2 tax documents (used to report income), using a pre-trained W-2 model.
-* Invoices—Analyze and extract common fields from invoices, using a pre-trained invoice model.
-* Receipts—Analyze and extract common fields from receipts, using a pre-trained receipt model.
-* ID documents—Analyze and extract common fields from ID documents like passports or driver's licenses, using a pre-trained ID documents model.
-* Business Cards—Analyze and extract common fields from business cards, using a pre-trained business cards model.
-
-**Custom Models**
-
-* Custom—Analyze and extract form fields and other content from your custom forms, using models you trained with your own form types.
-* Composed custom—Compose a collection of custom models and assign them to a single model ID.
-
-## Prerequisites
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-
-* curl command line tool installed.
-
- * [Windows](https://curl.haxx.se/windows/)
- * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows)
-
-* **PowerShell version 7.*+** (or a similar command-line application):
- * [Windows](/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.2&preserve-view=true)
- * [macOS](/powershell/scripting/install/installing-powershell-on-macos?view=powershell-7.2&preserve-view=true)
- * [Linux](/powershell/scripting/install/installing-powershell-on-linux?view=powershell-7.2&preserve-view=true)
-
-* To check your PowerShell version, type the following:
- * Windows: `Get-Host | Select-Object Version`
- * macOS or Linux: `$PSVersionTable`
-
-* A Form Recognizer (single-service) or Cognitive Services (multi-service) resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
-
-> [!TIP]
-> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
-
-* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart:
-
- :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-
-## Analyze documents and get results
-
- A POST request is used to analyze documents with a prebuilt or custom model. A GET request is used to retrieve the result of a document analysis call. The `modelId` is used with POST and `resultId` with GET operations.
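
As a rough illustration of the request shapes described above, the analyze URL can be assembled from the endpoint, model ID, and API version. The helper below is a hypothetical sketch, not part of the service or any SDK:

```python
# Hypothetical helper: build the POST URL used to start a document analysis,
# following the URL shape shown in the cURL command in this quickstart.
def analyze_url(endpoint: str, model_id: str, api_version: str = "2022-06-30-preview") -> str:
    """Return the analyze request URL for the given model and API version."""
    return f"{endpoint}/formrecognizer/documentModels/{model_id}:analyze?api-version={api_version}"
```

For example, `analyze_url("https://contoso.cognitiveservices.azure.com", "prebuilt-invoice")` produces the same URL shape as the cURL command in this quickstart.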
-
-### Analyze document (POST Request)
-
-Before you run the cURL command, make the following changes:
-
-1. Replace `{endpoint}` with the endpoint value from your Azure portal Form Recognizer instance.
-
-1. Replace `{key}` with the key value from your Azure portal Form Recognizer instance.
-
-1. Using the table below as a reference, replace `{modelID}` and `{your-document-url}` with your desired values.
-
-1. You'll need a document file at a URL. For this quickstart, you can use the sample forms provided in the table below for each feature.
-
-> [!IMPORTANT]
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article for more information.
-
-#### POST request
-
-```bash
-curl -v -i -X POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-06-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
-```
-
-#### Reference table
-
-| **Feature** | **{modelID}** | **{your-document-url}** |
-| | |--|
-| General Document | prebuilt-document | [Sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf) |
-| Read | prebuilt-read | [Sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/read.png) |
-| Layout | prebuilt-layout | [Sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/layout.png) |
-| W-2 | prebuilt-tax.us.w2 | [Sample W-2](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/w2.png) |
-| Invoices | prebuilt-invoice | [Sample invoice](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/raw/master/curl/form-recognizer/rest-api/invoice.pdf) |
-| Receipts | prebuilt-receipt | [Sample receipt](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/receipt.png) |
-| ID Documents | prebuilt-idDocument | [Sample ID document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/identity_documents.png) |
-| Business Cards | prebuilt-businessCard | [Sample business card](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/de5e0d8982ab754823c54de47a47e8e499351523/curl/form-recognizer/rest-api/business_card.jpg) |
-
-#### POST response
-
-You'll receive a `202 (Accepted)` response that includes an **Operation-Location** header. The value of this header contains a `resultID` that can be queried to get the status of the asynchronous operation:
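
As a sketch, the `resultID` can be pulled out of that header value. This helper is hypothetical and assumes the header has the usual shape `{endpoint}/formrecognizer/documentModels/{modelId}/analyzeResults/{resultId}?api-version=...`:

```python
from urllib.parse import urlparse

# Hypothetical helper: extract the resultId from an Operation-Location header,
# assuming the resultId is the last path segment of the URL.
def result_id_from_operation_location(header_value: str) -> str:
    """Return the last path segment of the header URL, which carries the resultId."""
    path = urlparse(header_value).path
    return path.rstrip("/").rsplit("/", 1)[-1]
```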
-### Get analyze results (GET Request)
-
-After you've called the [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) API, call the [**Get analyze result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/GetAnalyzeDocumentResult) API to get the status of the operation and the extracted data. Before you run the command, make these changes:
-
-1. Replace `{POST response}` with the **Operation-Location** header value from the [POST response](#post-response).
-
-1. Replace `{key}` with the key value from your Form Recognizer instance in the Azure portal.
-
-<!-- markdownlint-disable MD024 -->
-
-#### GET request
-
-```bash
-curl -v -X GET "{POST response}" -H "Ocp-Apim-Subscription-Key: {key}"
-```
-
-#### Examine the response
-
-You'll receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation isn't complete, the value of `"status"` will be `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
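
A minimal sketch of that polling loop, assuming the status values above. Both the function and its `fetch_status` callback are hypothetical stand-ins; in a real script, `fetch_status` would issue the GET request and return the parsed JSON body:

```python
import time

# Hypothetical sketch of the recommended polling loop. `fetch_status` stands in
# for the GET request above and should return the parsed JSON response body.
def poll_until_done(fetch_status, interval=1.0, max_attempts=60):
    """Poll until the "status" field leaves "notStarted"/"running"."""
    for _ in range(max_attempts):
        body = fetch_status()
        if body["status"] not in ("notStarted", "running"):
            return body  # typically "succeeded" or "failed"
        time.sleep(interval)  # one second or more between calls, per the guidance above
    raise TimeoutError("document analysis didn't finish in time")
```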
-
-#### Sample response for prebuilt-invoice
-
-```json
-{
- "status": "succeeded",
- "createdDateTime": "2022-03-25T19:31:37Z",
- "lastUpdatedDateTime": "2022-03-25T19:31:43Z",
- "analyzeResult": {
- "apiVersion": "2022-06-30",
- "modelId": "prebuilt-invoice",
- "stringIndexType": "textElements"...
- ..."pages": [
- {
- "pageNumber": 1,
- "angle": 0,
- "width": 8.5,
- "height": 11,
- "unit": "inch",
- "words": [
- {
- "content": "CONTOSO",
- "boundingBox": [
- 0.5911,
- 0.6857,
- 1.7451,
- 0.6857,
- 1.7451,
- 0.8664,
- 0.5911,
- 0.8664
- ],
- "confidence": 1,
- "span": {
- "offset": 0,
- "length": 7
- }
- },
-}
-```
-
-#### Supported document fields
-
-The prebuilt models extract pre-defined sets of document fields. See [Model data extraction](../concept-model-overview.md#model-data-extraction) for extracted field names, types, descriptions, and examples.
-
-## Next steps
-
-In this quickstart, you used the Form Recognizer REST API preview (v3.0) to analyze forms in different ways. Next, further explore the Form Recognizer Studio and latest reference documentation to learn more about the Form Recognizer API.
-
->[!div class="nextstepaction"]
-> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio)
-
-> [!div class="nextstepaction"]
-> [REST API preview (v3.0) reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
+
+ Title: What is the Form Recognizer SDK?
+
+description: The Form Recognizer software development kit (SDK) exposes Form Recognizer models, features and capabilities, making it easier to develop document-processing applications.
+++++ Last updated : 08/22/2022+
+recommendations: false
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD023 -->
+
+# What is the Form Recognizer SDK?
+
+Azure Cognitive Services Form Recognizer is a cloud service that uses machine learning to analyze text and structured data from documents. The Form Recognizer software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Form Recognizer models and capabilities into your applications. Form Recognizer SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages.
+
+## Supported languages
+
+Form Recognizer SDK supports the following languages and platforms:
+
+> [!NOTE]
+>
+> * The Form Recognizer SDKs currently don't support Form Recognizer REST API version 2022-08-31.
+> * The current SDKs support [REST API 2022-06-30-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) and earlier releases.
+
+| Programming language/SDK | Package| Azure SDK client-library |Supported API version| Platform support |
+|:-:|:-|:-| :-|--|
+|[C#/4.0.0-beta.5](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-csharp#set-up)| [NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5) | [Azure SDK for .NET](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0-beta.5/index.html)|[2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
+|[Java/4.0.0-beta.6](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-java#set-up) |[Maven](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar) | [Azure SDK for Java](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0-beta.6/index.html)|[2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer)|[Windows, macOS, Linux](/java/openjdk/install)|
+|[JavaScript/4.0.0-beta.6](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-javascript#set-up)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)| [Azure SDK for JavaScript](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0-beta.6/index.html) | [2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
+|[Python/3.2.0b6](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-python#set-up) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)| [Azure SDK for Python](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0b6/index.html)| [2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
+
+## How to use the Form Recognizer SDK in your applications
+
+The Form Recognizer SDK enables the use and management of the Form Recognizer service in your application. The SDK builds on the underlying Form Recognizer REST API, allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Form Recognizer SDK for your preferred language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.FormRecognizer --version 4.0.0-beta.5
+```
+
+```powershell
+Install-Package Azure.AI.FormRecognizer -Version 4.0.0-beta.5
+```
+
+### [Java](#tab/java)
+
+```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-ai-formrecognizer</artifactId>
+ <version>4.0.0-beta.5</version>
+ </dependency>
+```
+
+```kotlin
+implementation("com.azure:azure-ai-formrecognizer:4.0.0-beta.5")
+```
+
+### [JavaScript](#tab/javascript)
+
+```console
+npm i @azure/ai-form-recognizer@4.0.0-beta.6
+```
+
+### [Python](#tab/python)
+
+```console
+pip install azure-ai-formrecognizer==3.2.0b6
+```
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+```
+
+### [Java](#tab/java)
+
+```java
+import com.azure.ai.formrecognizer.*;
+import com.azure.ai.formrecognizer.models.*;
+import com.azure.ai.formrecognizer.DocumentAnalysisClient.*;
+
+import com.azure.core.credential.AzureKeyCredential;
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+```
+### 3. Set up authentication
+
+There are two supported methods for authentication:
+
+* Use a [Form Recognizer API key](#use-your-api-key) with AzureKeyCredential from azure.core.credentials.
+
+* Use a [token credential from azure-identity](#use-an-azure-active-directory-azure-ad-token-credential) to authenticate with [Azure Active Directory](/azure/active-directory/fundamentals/active-directory-whatis).
+
+#### Use your API key
+
+Here's where to find your Form Recognizer API key in the Azure portal:
+### [C#/.NET](#tab/csharp)
+
+```csharp
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string key = "<your-key>";
+string endpoint = "<your-endpoint>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+```
+
+### [Java](#tab/java)
+
+```java
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential("<your-key>"))
+ .endpoint("<your-endpoint>")
+ .buildClient();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+
+// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+async function main() {
+ const client = new DocumentAnalysisClient("<your-endpoint>", new AzureKeyCredential("<your-key>"));
+```
+
+### [Python](#tab/python)
+
+```python
+
+# create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ document_analysis_client = DocumentAnalysisClient(endpoint="<your-endpoint>", credential=AzureKeyCredential("<your-key>"))
+```
+++
+#### Use an Azure Active Directory (Azure AD) token credential
+
+> [!NOTE]
+> Regional endpoints do not support AAD authentication. Create a [custom subdomain](/azure/cognitive-services/authentication?tabs=powershell#create-a-resource-with-a-custom-subdomain) for your resource in order to use this type of authentication.
+
+The easiest way to authorize is with `DefaultAzureCredential`. It provides a default token credential, based on the running environment, that can handle most Azure authentication scenarios.
+
+### [C#/.NET](#tab/csharp)
+
+Here's how to acquire and use the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet&preserve-view=true) for .NET applications:
+
+1. Install the [Azure Identity library for .NET](/dotnet/api/overview/azure/identity-readme):
+
+ ```console
+ dotnet add package Azure.Identity
+ ```
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](/azure/cognitive-services/authentication?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret in the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```csharp
+ string endpoint = "<your-endpoint>";
+ var client = new DocumentAnalysisClient(new Uri(endpoint), new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer#authenticate-the-client).
+
+### [Java](#tab/java)
+
+Here's how to acquire and use the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential?view=azure-java-stable&preserve-view=true) for Java applications:
+
+1. Install the [Azure Identity library for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true):
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.5.3</version>
+ </dependency>
+ ```
+
+1. [Register an Azure AD application and create a new service principal](/azure/cognitive-services/authentication?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance and **`TokenCredential`** variable:
+
+ ```java
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
+ .endpoint("{your-endpoint}")
+ .credential(credential)
+ .buildClient();
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
+
+### [JavaScript](#tab/javascript)
+
+Here's how to acquire and use the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential?view=azure-node-latest&preserve-view=true) for JavaScript applications:
+
+1. Install the [Azure Identity library for JavaScript](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true):
+
+   ```console
+ npm install @azure/identity
+ ```
+
+1. [Register an Azure AD application and create a new service principal](/azure/cognitive-services/authentication?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```javascript
+ const { DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
+ const { DefaultAzureCredential } = require("@azure/identity");
+
+ const client = new DocumentAnalysisClient("<your-endpoint>", new DefaultAzureCredential());
+ ```
+
+For more information, *see* [Create and authenticate a client](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/formrecognizer/ai-form-recognizer#create-and-authenticate-a-client).
+
+### [Python](#tab/python)
+
+Here's how to acquire and use the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true) for Python applications.
+
+1. Install the [Azure Identity library for Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true):
+
+   ```console
+ pip install azure-identity
+   ```
+
+1. [Register an Azure AD application and create a new service principal](/azure/cognitive-services/authentication?tabs=powershell#assign-a-role-to-a-service-principal).
+
+1. Grant access to Form Recognizer by assigning the **`Cognitive Services User`** role to your service principal.
+
+1. Set the values of the client ID, tenant ID, and client secret of the Azure AD application as environment variables: **`AZURE_CLIENT_ID`**, **`AZURE_TENANT_ID`**, and **`AZURE_CLIENT_SECRET`**, respectively.
+
+1. Create your **`DocumentAnalysisClient`** instance including the **`DefaultAzureCredential`**:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ from azure.ai.formrecognizer import DocumentAnalysisClient
+
+ credential = DefaultAzureCredential()
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint="https://<my-custom-subdomain>.cognitiveservices.azure.com/",
+ credential=credential
+ )
+ ```
+
+For more information, *see* [Authenticate the client](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer#authenticate-the-client).
+++
+### 4. Build your application
+
+First, you'll create a client object to connect to the Form Recognizer service, and then call methods on that client to interact with it. The SDKs provide both synchronous and asynchronous methods. For a complete walkthrough, try a [quickstart](quickstarts/get-started-v3-sdk-rest-api.md) in a language of your choice.
+
+## Help options
+
+The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-form-recognizer) forums are available for the developer community to ask and answer questions about Azure Form Recognizer and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-form-recognizer`**.
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Try a Form Recognizer quickstart**](quickstarts/get-started-v3-sdk-rest-api.md)
+
+> [!div class="nextstepaction"]
+> [**Explore the Form Recognizer REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
Previously updated : 06/06/2022 Last updated : 08/22/2022
This article contains a quick reference and a **detailed description** of Azure Form Recognizer service quotas and limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/form-recognizer/). It also includes best practices to help you avoid request throttling.
-For the usage with [Form Recognizer SDK](quickstarts/try-v3-csharp-sdk.md), [Form Recognizer REST API](quickstarts/try-v3-rest-api.md), [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md) and [Sample Labeling Tool](https://fott-2-1.azurewebsites.net/).
+For the usage with [Form Recognizer SDK](quickstarts/get-started-v3-sdk-rest-api.md), [Form Recognizer REST API](quickstarts/get-started-v3-sdk-rest-api.md), [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md) and [Sample Labeling Tool](https://fott-2-1.azurewebsites.net/).
| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
|--|--|--|
For the usage with [Form Recognizer SDK](quickstarts/try-v3-csharp-sdk.md), [For
| **Max number of Neural models** | 100 | 500 |
| Adjustable | No | No |
-# [Form Recognizer v3.0 (Preview)](#tab/v30)
+# [Form Recognizer v3.0](#tab/v30)
| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
|--|--|--|
For the usage with [Form Recognizer SDK](quickstarts/try-v3-csharp-sdk.md), [For
<sup>3</sup> Open a support request to increase the monthly training limit.
-# [Form Recognizer v2.1 (GA)](#tab/v21)
+# [Form Recognizer v2.1](#tab/v21)
| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
|--|--|--|
Generally, it's highly recommended to test the workload and the workload pattern
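When a request is throttled, retrying on an exponential backoff schedule is a common mitigation. The sketch below is illustrative only (the function name and defaults are not part of the service documentation); it computes the wait times a client might use between retries:

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=32.0, jitter=False):
    """Yield the wait time in seconds before each retry of a throttled request."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))  # 1, 2, 4, 8, ... seconds, capped
        if jitter:
            delay = random.uniform(0, delay)  # full jitter spreads out retry bursts
        yield delay

print(list(backoff_delays(max_retries=4)))  # [1.0, 2.0, 4.0, 8.0]
```

Adding jitter matters when many clients are throttled at once; without it, all of them retry at the same instants and re-trigger the throttle.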
## Next steps > [!div class="nextstepaction"]
-> [Learn about error codes and troubleshooting](preview-error-guide.md)
+> [Learn about error codes and troubleshooting](v3-error-guide.md)
applied-ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/supervised-table-tags.md
>[!TIP] >
-> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data. > * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
+> * *See* our [**REST API**](quickstarts/get-started-v3-sdk-rest-api.md) or [**C#**](quickstarts/get-started-v3-sdk-rest-api.md), [**Java**](quickstarts/get-started-v3-sdk-rest-api.md), [**JavaScript**](quickstarts/get-started-v3-sdk-rest-api.md), or [**Python**](quickstarts/get-started-v3-sdk-rest-api.md) SDK quickstarts to get started with v3.0.
In this article, you'll learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures, or items that aren't automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model.
applied-ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/tutorial-logic-apps.md
Previously updated : 01/11/2022 Last updated : 08/22/2022 recommendations: false #Customer intent: As a form-processing software developer, I want to learn how to use the Form Recognizer service with Logic Apps.
recommendations: false
> [!IMPORTANT] >
-> This tutorial and the Logic App Form Recognizer connector targets Form Recognizer REST API v2.1.
+> This tutorial and the Logic App Form Recognizer connector target Form Recognizer REST API v2.1 and must be used in conjunction with the [FOTT sample labeling tool](https://fott-2-1.azurewebsites.net/).
Azure Logic Apps is a cloud-based platform that can be used to automate workflows without writing a single line of code. The platform enables you to easily integrate Microsoft and third-party applications with your apps, data, services, and systems. A Logic App is the Azure resource you create when you want to develop a workflow. Here are a few examples of what you can do with a Logic App:
For more information, *see* [Logic Apps Overview](../../logic-apps/logic-apps-ov
## Prerequisites
-To complete this tutorial, You'll need the following resources:
+To complete this tutorial, you'll need the following resources:
* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/)
To complete this tutorial, You'll need the following resources:
1. After the resource deploys, select **Go to resource**.
- 1. Copy the **Keys and Endpoint** values from the resource you created and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
+ 1. Copy the **Keys and Endpoint** values from your resource in the Azure portal and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
:::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL."::: > [!TIP]
- > For further guidance, *see* [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
+ > For more information, *see* [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
* A free [**OneDrive**](https://onedrive.live.com/signup) or [**OneDrive for Business**](https://www.microsoft.com/microsoft-365/onedrive/onedrive-for-business) cloud storage account.
At this point, you should have a Form Recognizer resource and a OneDrive folder
1. A short validation check should run. After it completes successfully, select **Create** in the bottom-left corner.
-1. You will be redirected to a screen that says **Deployment in progress**. Give Azure some time to deploy; it can take a few minutes. After the deployment is complete, you should see a banner that says, **Your deployment is complete**. When you reach this screen, select **Go to resource**.
+1. You'll be redirected to a screen that says **Deployment in progress**. Give Azure some time to deploy; it can take a few minutes. After the deployment is complete, you should see a banner that says, **Your deployment is complete**. When you reach this screen, select **Go to resource**.
:::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-seven.gif" alt-text="GIF showing how to get to newly created Logic App resource.":::
-1. You'll be redirected to the **Logic Apps Designer** page. There is a short video for a quick introduction to Logic Apps available on the home screen. When you're ready to begin designing your Logic App, select the **Blank Logic App** button.
+1. You'll be redirected to the **Logic Apps Designer** page. There's a short video for a quick introduction to Logic Apps available on the home screen. When you're ready to begin designing your Logic App, select the **Blank Logic App** button.
:::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-eight.png" alt-text="Image showing how to enter the Logic App Designer.":::
At this point, you should have a Form Recognizer resource and a OneDrive folder
## Create automation flow
-Now that you have the Logic App connector resource set up and configured, the only thing left to do is to create the automation flow and test it out!
+Now that you have the Logic App connector resource set up and configured, the only thing left is to create the automation flow and test it out!
1. Search for and select **OneDrive** or **OneDrive for Business** in the search bar.
Now that you have the Logic App connector resource set up and configured, the on
1. Next, we're going to add a new step to the workflow. Select the plus button underneath the newly created OneDrive node.
-1. A new node should be added to the Logic App designer view. Search for "Form Recognizer" in the search bar and select **Analyze invoice (preview)** from the list.
+1. A new node should be added to the Logic App designer view. Search for "Form Recognizer" in the search bar and select **Analyze invoice** from the list.
-1. Now, you should see a window where you will create your connection. Specifically, you're going to connect your Form Recognizer resource to the Logic Apps Designer Studio:
+1. Now, you should see a window where you'll create your connection. Specifically, you're going to connect your Form Recognizer resource to the Logic Apps Designer Studio:
* Enter a **Connection name**. It should be something easy to remember. * Enter the Form Recognizer resource **Endpoint URL** and **Account Key** that you copied previously. If you skipped this step earlier or lost the strings, you can navigate back to your Form Recognizer resource and copy them again. When you're done, select **Create**.
applied-ai-services V3 Error Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-error-guide.md
+
+ Title: "Reference: Form Recognizer Errors"
+
+description: Learn how errors are represented in Form Recognizer and find a list of possible errors returned by the service.
+++++ Last updated : 08/22/2022+++
+# Form Recognizer error guide v3.0
+
+Form Recognizer uses a unified design to represent all errors encountered in the REST APIs. Whenever an API operation returns a 4xx or 5xx status code, additional information about the error is returned in the response JSON body as follows:
+
+```json
+{
+ "error": {
+ "code": "InvalidRequest",
+ "message": "Invalid request.",
+ "innererror": {
+ "code": "InvalidContent",
+ "message": "The file format is unsupported or corrupted. Refer to documentation for the list of supported formats."
+ }
+ }
+}
+```
+
+For long-running operations where multiple errors may be encountered, the top-level error code is set to the most severe error, with the individual errors listed under the *error.details* property. In such scenarios, the *target* property of each individual error specifies the trigger of the error.
+
+```json
+{
+ "status": "failed",
+ "createdDateTime": "2021-07-14T10:17:51Z",
+ "lastUpdatedDateTime": "2021-07-14T10:17:51Z",
+ "error": {
+ "code": "InternalServerError",
+ "message": "An unexpected error occurred.",
+ "details": [
+ {
+ "code": "InternalServerError",
+ "message": "An unexpected error occurred."
+ },
+ {
+ "code": "InvalidContentDimensions",
+ "message": "The input image dimensions are out of range. Refer to documentation for supported image dimensions.",
+ "target": "2"
+ }
+ ]
+ }
+}
+```
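The error payloads above are plain JSON, so a client can flatten them for logging with a few lines of code. This sketch is illustrative (the helper name isn't part of any SDK); the property names `error`, `innererror`, `details`, and `target` match the responses shown above:

```python
import json

# Sample failed-operation response, matching the shape shown above.
response_body = """
{
  "status": "failed",
  "error": {
    "code": "InternalServerError",
    "message": "An unexpected error occurred.",
    "details": [
      {"code": "InternalServerError", "message": "An unexpected error occurred."},
      {"code": "InvalidContentDimensions",
       "message": "The input image dimensions are out of range.",
       "target": "2"}
    ]
  }
}
"""

def summarize_error(body: dict) -> list[str]:
    """Flatten a Form Recognizer error object into human-readable lines."""
    error = body.get("error", {})
    lines = [f"{error.get('code')}: {error.get('message')}"]
    # Long-running operations list individual failures under error.details;
    # 'target' identifies what triggered each one (here, a page index).
    for detail in error.get("details", []):
        target = f" (target: {detail['target']})" if "target" in detail else ""
        lines.append(f"  {detail['code']}: {detail['message']}{target}")
    # Single-request errors nest the specific cause under innererror instead.
    inner = error.get("innererror")
    if inner:
        lines.append(f"  {inner['code']}: {inner['message']}")
    return lines

for line in summarize_error(json.loads(response_body)):
    print(line)
```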
+
+The top-level *error.code* property can be one of the following error codes:
+
+| Error Code | Message | HTTP Status |
+| -- | -- | -- |
+| InvalidRequest | Invalid request. | 400 |
+| InvalidArgument | Invalid argument. | 400 |
+| Forbidden | Access forbidden due to policy or other configuration. | 403 |
+| NotFound | Resource not found. | 404 |
+| MethodNotAllowed | The requested HTTP method is not allowed. | 405 |
+| Conflict | The request could not be completed due to a conflict. | 409 |
+| UnsupportedMediaType | Request content type is not supported. | 415 |
+| InternalServerError | An unexpected error occurred. | 500 |
+| ServiceUnavailable | A transient error has occurred. Try again. | 503 |
+
+When possible, more details are specified in the *innererror* property.
+
+| Top Error Code | Inner Error Code | Message |
+| -- | - | - |
+| Conflict | ModelExists | A model with the provided name already exists. |
+| Forbidden | AuthorizationFailed | Authorization failed: {details} |
+| Forbidden | InvalidDataProtectionKey | Data protection key is invalid: {details} |
+| Forbidden | OutboundAccessForbidden | The request contains a domain name that is not allowed by the current access control policy. |
+| InternalServerError | Unknown | Unknown error. |
+| InvalidArgument | InvalidContentSourceFormat | Invalid content source: {details} |
+| InvalidArgument | InvalidParameter | The parameter {parameterName} is invalid: {details} |
+| InvalidArgument | InvalidParameterLength | Parameter {parameterName} length must not exceed {maxChars} characters. |
+| InvalidArgument | InvalidSasToken | The shared access signature (SAS) is invalid: {details} |
+| InvalidArgument | ParameterMissing | The parameter {parameterName} is required. |
+| InvalidRequest | ContentSourceNotAccessible | Content is not accessible: {details} |
+| InvalidRequest | ContentSourceTimeout | Timeout while receiving the file from client. |
+| InvalidRequest | DocumentModelLimit | Account cannot create more than {maximumModels} models. |
+| InvalidRequest | DocumentModelLimitNeural | Account cannot create more than 10 custom neural models per month. Please contact support to request additional capacity. |
+| InvalidRequest | DocumentModelLimitComposed | Account cannot create a model with more than {details} component models. |
+| InvalidRequest | InvalidContent | The file is corrupted or format is unsupported. Refer to documentation for the list of supported formats. |
+| InvalidRequest | InvalidContentDimensions | The input image dimensions are out of range. Refer to documentation for supported image dimensions. |
+| InvalidRequest | InvalidContentLength | The input image is too large. Refer to documentation for the maximum file size. |
+| InvalidRequest | InvalidFieldsDefinition | Invalid fields: {details} |
+| InvalidRequest | InvalidTrainingContentLength | Training content contains {bytes} bytes. Training is limited to {maxBytes} bytes. |
+| InvalidRequest | InvalidTrainingContentPageCount | Training content contains {pages} pages. Training is limited to {pages} pages. |
+| InvalidRequest | ModelAnalyzeError | Could not analyze using a custom model: {details} |
+| InvalidRequest | ModelBuildError | Could not build the model: {details} |
+| InvalidRequest | ModelComposeError | Could not compose the model: {details} |
+| InvalidRequest | ModelNotReady | Model is not ready for the requested operation. Wait for training to complete or check for operation errors. |
+| InvalidRequest | ModelReadOnly | The requested model is read-only. |
+| InvalidRequest | NotSupportedApiVersion | The requested operation requires {minimumApiVersion} or later. |
+| InvalidRequest | OperationNotCancellable | The operation can no longer be canceled. |
+| InvalidRequest | TrainingContentMissing | Training data is missing: {details} |
+| InvalidRequest | UnsupportedContent | Content is not supported: {details} |
+| NotFound | ModelNotFound | The requested model was not found. It may have been deleted or is still building. |
+| NotFound | OperationNotFound | The requested operation was not found. The identifier may be invalid or the operation may have expired. |
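Because `ServiceUnavailable` is the only top-level code the table above describes as transient, a client can use the code to decide whether retrying the unmodified request is worthwhile. A minimal sketch follows; the retry policy shown (also retrying `InternalServerError`) is an illustrative judgment call, not service guidance:

```python
# Top-level error codes from the table above that may succeed on retry.
# ServiceUnavailable is documented as transient; treating InternalServerError
# as retryable is an illustrative choice. All 4xx codes mean the request
# itself must be fixed before resending.
TRANSIENT_CODES = {"ServiceUnavailable", "InternalServerError"}

def should_retry(error_code: str) -> bool:
    """Return True when retrying the unmodified request may succeed."""
    return error_code in TRANSIENT_CODES

print(should_retry("ServiceUnavailable"))  # True
print(should_retry("InvalidRequest"))      # False
```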
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
Previously updated : 07/20/2022 Last updated : 08/22/2022 recommendations: false
-# Form Recognizer v3.0 migration | Preview
+# Form Recognizer v3.0 migration
> [!IMPORTANT] > > Form Recognizer REST API v3.0 introduces breaking changes in the REST API request and analyze response JSON.
-Form Recognizer v3.0 (preview) introduces several new features and capabilities:
+Form Recognizer v3.0 introduces several new features and capabilities:
-* [Form Recognizer REST API](quickstarts/try-v3-rest-api.md) has been redesigned for better usability.
+* [Form Recognizer REST API](quickstarts/get-started-v3-sdk-rest-api.md) has been redesigned for better usability.
* [**General document (v3.0)**](concept-general-document.md) model is a new API that extracts text, tables, structure, and key-value pairs, from forms and documents.
-* [**Custom document model (v3.0)**](concept-custom-neural.md) is a new custom model type to extract fields from structured and unstructured documents.
+* [**Custom neural model (v3.0)**](concept-custom-neural.md) is a new custom model type to extract fields from structured and unstructured documents.
* [**Receipt (v3.0)**](concept-receipt.md) model supports single-page hotel receipt processing. * [**ID document (v3.0)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses. * [**Custom model API (v3.0)**](concept-custom.md) supports signature detection for custom template models.
In this article, you'll learn the differences between Form Recognizer v2.1 and v
> [!CAUTION] >
-> * REST API **2022-06-30-preview** release includes a breaking change in the REST API analyze response JSON.
+> * REST API **2022-08-31** release includes a breaking change in the REST API analyze response JSON.
> * The `boundingBox` property is renamed to `polygon` in each instance. ## Changes to the REST API endpoints
In this article, you'll learn the differences between Form Recognizer v2.1 and v
### POST request

```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-06-30
+https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31
```

### GET request

```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/AnalyzeResult/{resultId}?api-version=2022-06-30
+https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/AnalyzeResult/{resultId}?api-version=2022-08-31
```

### Analyze operation
https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/
* The request payload and call pattern remain unchanged.
* The Analyze operation specifies the input document and content-specific configurations and returns the Analyze Result URL via the Operation-Location header in the response.
-* Upon success, status is set to succeeded and [analyzeResult](#changes-to-analyze-result) is returned in the response body. If errors are encountered, status will be set to failed and an error will be returned.
+* Upon success, status is set to succeeded and [analyzeResult](#changes-to-analyze-result) is returned in the response body. If errors are encountered, status will be set to `failed`, and an error will be returned.
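The poll-until-done pattern described above can be sketched without an SDK. In this illustrative example, the `fetch_status` callable stands in for a GET request to the Operation-Location URL, and the simulated responses are hypothetical:

```python
import time

def poll_until_done(fetch_status, interval=1.0, max_attempts=60):
    """Poll the Analyze Result URL until the long-running operation finishes."""
    for _ in range(max_attempts):
        body = fetch_status()  # a GET on the Operation-Location URL, parsed as JSON
        status = body.get("status")
        if status == "succeeded":
            return body["analyzeResult"]
        if status == "failed":
            raise RuntimeError(body.get("error", {}).get("message", "analyze failed"))
        time.sleep(interval)  # minimum recommended interval is 1 second
    raise TimeoutError("analyze operation did not complete in time")

# Simulated responses standing in for two successive GET calls:
responses = iter([
    {"status": "running"},
    {"status": "succeeded", "analyzeResult": {"apiVersion": "2022-08-31"}},
])
result = poll_until_done(lambda: next(responses), interval=0)
print(result["apiVersion"])  # 2022-08-31
```

In real code, `fetch_status` would issue the GET with your key or token credential; the SDK clients wrap this loop for you behind their `begin_*` poller methods.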
| Model | v2.1 | v3.0 |
|:--| :--| :--|
| **Request URL prefix**| **https://{your-form-recognizer-endpoint}/formrecognizer/v2.1** | **https://{your-form-recognizer-endpoint}/formrecognizer** |
-|🆕 **General document**|N/A|`/documentModels/prebuilt-document:analyze` |
+| **General document**|N/A|`/documentModels/prebuilt-document:analyze` |
| **Layout**| /layout/analyze |`/documentModels/prebuilt-layout:analyze`|
|**Custom**| /custom/{modelId}/analyze |`/documentModels/{modelId}:analyze` |
| **Invoice** | /prebuilt/invoice/analyze | `/documentModels/prebuilt-invoice:analyze` |
Base64 encoding is also supported in Form Recognizer v3.0:
} ```
-### Additional parameters
+### Additional supported parameters
Parameters that continue to be supported:
Analyze response has been refactored to the following top-level results to suppo
{ // Basic analyze result metadata
-"apiVersion": "2022-06-30", // REST API version used
+"apiVersion": "2022-08-31", // REST API version used
"modelId": "prebuilt-invoice", // ModelId used "stringIndexType": "textElements", // Character unit used for string offsets and lengths: // textElements, unicodeCodePoint, utf16CodeUnit // Concatenated content in global reading order across pages.
Analyze response has been refactored to the following top-level results to suppo
"boundingRegions": [ // Polygons or Bounding boxes potentially across pages covered by table { "pageNumber": 1, // 1-indexed page number
-"polygon": [ ... ], // Previously Bounding box, renamed to polygon in the 2022-06-30-preview API
+"polygon": [ ... ], // Previously Bounding box, renamed to polygon in the 2022-08-31 API
} ], "spans": [ ... ], // Parts of top-level content covered by table // List of cells in table
The model object has three updates in the new API
* ```modelId``` is now a property that can be set on a model for a human readable name. * ```modelName``` has been renamed to ```description```
-* ```buildMode``` is a new property with values of ```template``` for custom form models or ```neural``` for custom document models.
+* ```buildMode``` is a new property with values of ```template``` for custom form models or ```neural``` for custom neural models.
-The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset, it returns the result via the Operation-Location header in the response. Poll this model operation URL, via a GET request to check the status of the build operation (minimum recommended interval between requests is 1 second). Unlike v2.1, this URL isn't the resource location of the model. Instead, the model URL can be constructed from the given modelId, also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and result contains the custom model info. If errors are encountered, status is set to ```failed``` and the error is returned.
+The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset and returns the result via the Operation-Location header in the response. Poll this model operation URL, via a GET request to check the status of the build operation (minimum recommended interval between requests is 1 second). Unlike v2.1, this URL isn't the resource location of the model. Instead, the model URL can be constructed from the given modelId, also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and result contains the custom model info. If errors are encountered, status is set to ```failed```, and the error is returned.
The following code is a sample build request using a SAS token. Note the trailing slash when setting the prefix or folder path. ```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
+POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
{ "modelId": {modelId},
POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build
Model compose is now limited to single level of nesting. Composed models are now consistent with custom models with the addition of ```modelId``` and ```description``` properties. ```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:compose?api-version=2022-06-30
+POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:compose?api-version=2022-08-31
{ "modelId": "{composedModelId}", "description": "{composedModelDescription}",
The only changes to the copy model function are:
***Authorize the copy*** ```json
-POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-version=2022-06-30
+POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31
{ "modelId": "{targetModelId}", "description": "{targetModelDescription}",
POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-versio
Use the response body from the authorize action to construct the request for the copy.

```json
-POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copy-to?api-version=2022-06-30
+POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copy-to?api-version=2022-08-31
{ "targetResourceId": "{targetResourceId}", "targetResourceRegion": "{targetResourceRegion}",
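The two-step copy handshake — authorize on the target resource, then copy from the source resource using the authorization response verbatim — can be sketched as below. The helper names are illustrative, not part of the API:

```python
API_VERSION = "2022-08-31"

def authorize_copy_request(target_host, target_model_id, description=""):
    """Step 1: ask the *target* resource to authorize an incoming copy."""
    url = (f"https://{target_host}/formrecognizer/documentModels:authorizeCopy"
           f"?api-version={API_VERSION}")
    return url, {"modelId": target_model_id, "description": description}

def copy_to_request(source_host, source_model_id, authorization):
    """Step 2: POST to the *source* resource, passing the authorize
    response body through unchanged as the request payload."""
    url = (f"https://{source_host}/formrecognizer/documentModels/"
           f"{source_model_id}:copy-to?api-version={API_VERSION}")
    return url, authorization
```

Passing the authorization body through unmodified is the key design point: the source resource never needs credentials for the target, only the short-lived grant the target issued.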
List models have been extended to now return prebuilt and custom models. All pre
***Sample list models request***

```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-version=2022-06-30
+GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-version=2022-08-31
```

## Change to get model
GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-ve
As get model now includes prebuilt models, the get operation returns a ```docTypes``` dictionary. Each document type is described by its name, optional description, field schema, and optional field confidence. The field schema describes the list of fields potentially returned with the document type.

```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-06-30
+GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31
```

## New get info operation
GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{model
The ```info``` operation on the service returns the custom model count and custom model limit.

```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/info? api-version=2022-06-30
+GET https://{your-form-recognizer-endpoint}/formrecognizer/info?api-version=2022-08-31
```

***Sample response***
GET https://{your-form-recognizer-endpoint}/formrecognizer/info? api-version=202
## Next steps
-In this migration guide, you've learned how to upgrade your existing Form Recognizer application to use the v3.0 APIs. Continue to use the 2.1 API for all GA features and use the 3.0 API for any of the preview features.
+In this migration guide, you've learned how to upgrade your existing Form Recognizer application to use the v3.0 APIs.
-* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)
* [What is Form Recognizer?](overview.md) * [Form Recognizer quickstart](./quickstarts/try-sdk-rest-api.md)
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 06/29/2022 Last updated : 08/22/2022 <!-- markdownlint-disable MD024 -->
Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+## August 2022
+
+### Form Recognizer v3.0 generally available
+
+**Form Recognizer REST API v3.0 is now generally available and ready for use in production applications!**
+
+#### The August release introduces the following updates:
+
+##### Form Recognizer Studio updates
+
+* **Next steps**. Under each model page, the Studio now has a next steps section. Users can quickly reference sample code, troubleshooting guidelines, and pricing information.
+
+* **Custom models**. The Studio now includes the ability to reorder labels in custom model projects to improve labeling efficiency.
+
+* **Copy models**. Custom models can be copied across Form Recognizer services from within the Studio. This enables the promotion of a trained model to other environments and regions.
+
+* **Delete documents**. The Studio now supports deleting documents from the labeled dataset within custom projects.
+
+##### Form Recognizer service updates
+
+* [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields will now resolve to the existing fields TotalTax and Line/Tax, respectively.
+
+* [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state IDs, social security cards, and green cards, as well as passport visa information.
+
+* [**prebuilt-receipt**](concept-receipt.md). Expanded locale support for French (fr-FR), Spanish (es-ES), Portuguese (pt-PT), Italian (it-IT), and German (de-DE).
+
+* [**prebuilt-businessCard**](concept-business-card.md). Address parsing support to extract sub-fields for address components like address, city, state, country, and zip code.
+
+* **AI quality improvements**
+
+ * [**custom-neural**](concept-custom-neural.md). Improved accuracy for table detection and extraction.
+
+ * [**prebuilt-layout**](concept-layout.md). Better detection of cropped tables and borderless tables, improved recognition of long spanning cells, improved paragraph grouping detection, and logical identification of headers and titles.
+
+ * [**prebuilt-document**](concept-general-document.md). Improved value and check box detection.
+
## June 2022

### [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) June Update
-The June release is the latest update to the Form Recognizer Studio. There are considerable UX and accessbility improvements addressed in this update:
+The June release is the latest update to the Form Recognizer Studio. There are considerable user experience and accessibility improvements addressed in this update:
-* 🆕 **Code sample for Javascript and C#**. The Studio code tab now adds Javascript and C# code samples in addition to the existing Python one.
-* 🆕 **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload user interface.
-* 🆕 **New feature for custom projects**. Custom projects now support creating storage account and blobs when configuring the project. In addition, custom project now supports uploading training files directly within the Studio and copying the existing custom model.
+* **Code samples for JavaScript and C#**. The Studio code tab now adds JavaScript and C# code samples in addition to the existing Python one.
+* **New document upload UI**. Studio now supports uploading a document with drag & drop into the new upload user interface.
+* **New features for custom projects**. Custom projects now support creating storage accounts and blobs when configuring the project. In addition, custom projects now support uploading training files directly within the Studio and copying an existing custom model.
### Form Recognizer v3.0 preview release
-The **2022-06-30-preview** release is the latest update to the Form Recognizer service for v3.0 capabilities and presents extensive updates across the feature APIs:
+The **2022-06-30-preview** release presents extensive updates across the feature APIs:
-* [🆕 **Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements including sections, section headers, and paragraphs. This update enables finer grain document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction).
-* [🆕 **Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields by default are also multi page. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
-* [🆕 **Custom template model tabular fields support for cross page tables**](concept-custom-template.md). Custom form models now support tabular fields across pages. To learn more about tabular fields in custom template models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
-* [🆕 **Invoice model output now includes general document key-value pairs**](concept-invoice.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key value pairs](concept-invoice.md#key-value-pairs-preview).
-* [🆕 **Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
-* [🆕 **Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
-* [🆕 **Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md#id-document-preview-field-extraction).
-* [🆕 **Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [page extraction](concept-read.md#pages).
+* [**Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements including sections, section headers, and paragraphs. This update enables finer grain document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction).
+* [**Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields are also multi-page by default. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+* [**Custom template model tabular fields support for cross-page tables**](concept-custom-template.md). Custom form models now support tabular fields across pages. To learn more about tabular fields in custom template models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+* [**Invoice model output now includes general document key-value pairs**](concept-invoice.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key value pairs](concept-invoice.md#key-value-pairs).
+* [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
+* [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
+* [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).
+* [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [page extraction](concept-read.md#pages).
#### Form Recognizer SDK beta preview release
This new release includes the following updates:
Form Recognizer v3.0 preview release introduces several new features and capabilities and enhances existing ones:
-* [🆕 **Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-strutured and **unstructured documents**.
-* [🆕 **W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
-* [🆕 **Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
+* [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured documents, and **unstructured documents**.
+* [**W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
+* [**Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
* [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents.
* [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices.
* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
-Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/try-v3-python-sdk.md), or [.NET](quickstarts/try-v3-csharp-sdk.md) SDK for the v3.0 preview API.
+Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/get-started-v3-sdk-rest-api.md), or [.NET](quickstarts/get-started-v3-sdk-rest-api.md) SDK for the v3.0 preview API.
#### Form Recognizer model data extraction
- | **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Entities** |**Signatures**|
+ | **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Signatures**|
| | :: |::| :: | :: |:: |
- |🆕Read | ✓ | | | | | |
- |🆕General document | ✓ | ✓ | ✓ | ✓ | ✓ | |
- | Layout | ✓ | | ✓ | ✓ | | |
- | Invoice | ✓ | ✓ | ✓ | ✓ || |
- |Receipt | ✓ | ✓ | | || |
- | ID document | ✓ | ✓ | | || |
- | Business card | ✓ | ✓ | | || |
- | Custom template |✓ | ✓ | ✓ | ✓ | | ✓ |
- | Custom neural |✓ | ✓ | ✓ | ✓ | | |
+ |Read | ✓ | | | | |
+ |General document | ✓ | ✓ | ✓ | ✓ | |
+ | Layout | ✓ | | ✓ | ✓ | |
+ | Invoice | ✓ | ✓ | ✓ | ✓ ||
+ |Receipt | ✓ | ✓ | | |✓|
+ | ID document | ✓ | ✓ | | ||
+ | Business card | ✓ | ✓ | | ||
+ | Custom template |✓ | ✓ | ✓ | ✓ | ✓ |
+ | Custom neural |✓ | ✓ | ✓ | ✓ | |
#### Form Recognizer SDK beta preview release
The latest beta release version of the Azure Form Recognizer SDKs incorporates n
This new release includes the following updates:
-* 🆕 [Custom Document models and modes](concept-custom.md):
+* [Custom Document models and modes](concept-custom.md):
  * [Custom template](concept-custom-template.md) (formerly custom form)
  * [Custom neural](concept-custom-neural.md)
  * [Custom model build mode](concept-custom.md#build-mode)
-* 🆕 [W-2 prebuilt model](concept-w2.md) (prebuilt-tax.us.w2).
+* [W-2 prebuilt model](concept-w2.md) (prebuilt-tax.us.w2).
-* 🆕 [Read prebuilt model](concept-read.md) (prebuilt-read).
+* [Read prebuilt model](concept-read.md) (prebuilt-read).
-* 🆕 [Invoice prebuilt model (Spanish)](concept-invoice.md#supported-languages-and-locales) (prebuilt-invoice).
+* [Invoice prebuilt model (Spanish)](concept-invoice.md#supported-languages-and-locales) (prebuilt-invoice).
### [**C#**](#tab/csharp)
The `BuildModelOperation` and `CopyModelOperation` now correctly populate the `P
* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) To simplify use of the service, you can now access the Form Recognizer Studio to test the different prebuilt models or label and train a custom model.
-Get stared with the new [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm), [Python](quickstarts/try-v3-python-sdk.md), or [.NET](quickstarts/try-v3-csharp-sdk.md) SDK for the v3.0 preview API.
+Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm), [Python](quickstarts/get-started-v3-sdk-rest-api.md), or [.NET](quickstarts/get-started-v3-sdk-rest-api.md) SDK for the v3.0 preview API.
#### Form Recognizer model data extraction

| **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Entities** |
| | :: |::| :: | :: |:: |
- |🆕General document | ✓ | ✓ | ✓ | ✓ | ✓ |
+ |General document | ✓ | ✓ | ✓ | ✓ | ✓ |
| Layout | ✓ | | ✓ | ✓ | |
| Invoice | ✓ | ✓ | ✓ | ✓ ||
|Receipt | ✓ | ✓ | | ||
The patch addresses invoices that don't have subline item fields detected such a
## May 2021
-### Form Recognizer 2.1 API Generally Available (GA) release
+### Form Recognizer 2.1 API Generally Available release
-* Form Recognizer 2.1 is generally available. The General Availability (GA) release marks the stability of the changes introduced in prior 2.1 preview package versions. This release enables you to detect and extract information and data from the following document types:
+* Form Recognizer 2.1 is generally available. The General Availability release marks the stability of the changes introduced in prior 2.1 preview package versions. This release enables you to detect and extract information and data from the following document types:
  * [Documents](concept-layout.md)
  * [Receipts](./concept-receipt.md)
pip package version 3.1.0b4
### New features
-* **SDK support for Form Recognizer API v2.0 Public Preview** - This month we expanded our service support to include a preview SDK for Form Recognizer v2.0 (preview) release. Use the links below to get started with your language of choice:
+* **SDK support for Form Recognizer API v2.0 Public Preview** - This month we expanded our service support to include a preview SDK for Form Recognizer v2.0 release. Use the links below to get started with your language of choice:
  * [.NET SDK](/dotnet/api/overview/azure/ai.formrecognizer-readme)
  * [Java SDK](/java/api/overview/azure/ai-formrecognizer-readme)
  * [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme)
TLS 1.2 is now enforced for all HTTP requests to this service. For more informat
## January 2020
-This release introduces the Form Recognizer 2.0 (preview). In the sections below, you'll find more information about new features, enhancements, and changes.
+This release introduces the Form Recognizer 2.0. In the sections below, you'll find more information about new features, enhancements, and changes.
### New features
Complete a [quickstart](./quickstarts/try-sdk-rest-api.md) to get started writin
## See also
-* [What is Form Recognizer?](./overview.md)
+* [What is Form Recognizer?](./overview.md)
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
-**An Arc-enabled server running as a Hybrid Runbook Worker** already has a built-in System Managed Identity assigned to it which can be used for authentication.
+**An Arc-enabled server or Arc-enabled VMware vSphere VM** running as a Hybrid Runbook Worker already has a built-in system-assigned managed identity that can be used for authentication.
1. You can grant this Managed Identity access to resources in your subscription in the Access control (IAM) blade for the resource by adding the appropriate role assignment.
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
Runbooks in Azure Automation might not have access to resources in other clouds or in your on-premises environment because they run on the Azure cloud platform. You can use the Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on the machine hosting the role and against resources in the environment to manage those local resources. Runbooks are stored and managed in Azure Automation and then delivered to one or more assigned machines.
-Azure Automation provides native integration of the Hybrid Runbook Worker role through the Azure virtual machine (VM) extension framework. The Azure VM agent is responsible for management of the extension on Azure VMs on Windows and Linux VMs, and on non-Azure machines through the Arc-enabled servers Connected Machine agent. Now there are two Hybrid Runbook Workers installation platforms supported by Azure Automation.
-
+Azure Automation provides native integration of the Hybrid Runbook Worker role through the Azure virtual machine (VM) extension framework. The Azure VM agent manages the extension on Windows and Linux Azure VMs, and the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) manages it on non-Azure machines, including [Azure Arc-enabled servers](../azure-arc/servers/overview.md) and [Azure Arc-enabled VMware vSphere](../azure-arc/vmware-vsphere/overview.md). There are now two Hybrid Runbook Worker installation platforms supported by Azure Automation.
+
| Platform | Description |
| -- | -- |
|**Extension-based (V2)** |Installed using the [Hybrid Runbook Worker VM extension](./extension-based-hybrid-runbook-worker-install.md), without any dependency on the Log Analytics agent reporting to an Azure Monitor Log Analytics workspace. **This is the recommended platform**.|
|**Agent-based (V1)** |Installed after the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) is completed.|
-
Here's a list of benefits available with the extension-based Hybrid Runbook Worker role:
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
The extension-based onboarding is only for **User** Hybrid Runbook Workers. This
For **System** Hybrid Runbook Worker onboarding, see [Deploy an agent-based Windows Hybrid Runbook Worker in Automation](./automation-windows-hrw-install.md) or [Deploy an agent-based Linux Hybrid Runbook Worker in Automation](./automation-linux-hrw-install.md).
-You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure machine or a non-Azure machine through servers registered with [Azure Arc-enabled servers](../azure-arc/servers/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources.
+You can use the user Hybrid Runbook Worker feature of Azure Automation to run runbooks directly on an Azure or non-Azure machine, including [Azure Arc-enabled servers](../azure-arc/servers/overview.md) and [Arc-enabled VMware vSphere](../azure-arc/vmware-vsphere/overview.md). From the machine or server that's hosting the role, you can run runbooks directly against it and against resources in the environment to manage those local resources.
Azure Automation stores and manages runbooks and then delivers them to one or more chosen machines. After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment.
Azure Automation stores and manages runbooks and then delivers them to one or mo
- Two cores
- 4 GB of RAM
-- The system-assigned managed identity must be enabled on the Azure virtual machine or Arc-enabled server. If the system-assigned managed identity isn't enabled, it will be enabled as part of the adding process.
-- Non-Azure machines must have the Azure Arc-enabled servers agent (the connected machine agent) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md).
+- The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server, or Arc-enabled VMware vSphere VM. If the system-assigned managed identity isn't enabled, it will be enabled as part of the adding process.
+- Non-Azure machines must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers, or see [Manage VMware virtual machines through Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) for Arc-enabled VMware vSphere VMs.
### Supported operating systems
Azure Automation stores and manages runbooks and then delivers them to one or mo
If you use a proxy server for communication between Azure Automation and machines running the extension-based Hybrid Runbook Worker, ensure that the appropriate resources are accessible. The timeout for requests from the Hybrid Runbook Worker and Automation services is 30 seconds. After three attempts, a request fails.

> [!NOTE]
-> You can set up the proxy settings by PowerShell cmdlets or API.
+> For Azure VMs and Arc-enabled Servers, you can set up the proxy settings using PowerShell cmdlets or API. This is currently not supported for Arc-enabled VMware vSphere VMs.
To install the extension using cmdlets:
To create a hybrid worker group in the Azure portal, follow these steps:
   - If you select **Default**, the hybrid extension will be installed using the local system account.
   - If you select **Custom**, then from the drop-down list, select the credential asset.
-1. Select **Next** to advance to the **Hybrid workers** tab. You can select Azure virtual machines or Azure Arc-enabled servers to be added to this Hybrid worker group. If you don't select any machines, an empty Hybrid worker group will be created. You can still add machines later.
+1. Select **Next** to advance to the **Hybrid workers** tab. You can select Azure virtual machines, Azure Arc-enabled servers, or Azure Arc-enabled VMware vSphere VMs to be added to this Hybrid worker group. If you don't select any machines, an empty Hybrid worker group will be created. You can still add machines later.
:::image type="content" source="./media/extension-based-hybrid-runbook-worker-install/basics-tab-portal.png" alt-text="Screenshot showing to enter name and credentials in basics tab.":::
You can also add machines to an existing hybrid worker group.
1. Select **Add** to add the machine to the group.
- Once added, you can see the machine type as Azure virtual machine or Arc-enabled server. The **Platform** field shows the worker as **Agent based (V1)** or **Extension based (V2)**.
+ After adding, you can see the machine type as Azure virtual machine, Server-Azure Arc, or VMware virtual machine-Azure Arc. The **Platform** field shows the worker as **Agent based (V1)** or **Extension based (V2)**.
- :::image type="content" source="./media/extension-based-hybrid-runbook-worker-install/hybrid-worker-group-platform.png" alt-text="Platform field showing agent or extension based.":::
+ :::image type="content" source="./media/extension-based-hybrid-runbook-worker-install/hybrid-worker-group-platform-inline.png" alt-text="Screenshot of platform field showing agent or extension based." lightbox="./media/extension-based-hybrid-runbook-worker-install/hybrid-worker-group-platform-expanded.png":::
## Install Extension-based (V2) on existing Agent-based (V1) Hybrid Worker
Using [VM insights](../azure-monitor/vm/vminsights-overview.md), you can monitor
- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md).
-- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).
+- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).
+- To learn about VM extensions for Arc-enabled VMware vSphere VMs, see [Manage VMware VMs in Azure through Arc-enabled VMware vSphere](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md).
+
azure-arc Conceptual Connectivity Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-connectivity-modes.md
Title: "Azure Arc-enabled Kubernetes connectivity modes" Previously updated : 11/23/2021 Last updated : 08/22/2022 description: "This article provides an overview of the connectivity modes supported by Azure Arc-enabled Kubernetes" keywords: "Kubernetes, Arc, Azure, containers"
keywords: "Kubernetes, Arc, Azure, containers"
# Azure Arc-enabled Kubernetes connectivity modes
-Azure Arc-enabled Kubernetes requires deployment of Azure Arc agents on your Kubernetes clusters using which capabilities like configurations (GitOps), extensions, Cluster Connect and Custom Location are made available on the cluster. Kubernetes clusters deployed on the edge may not have constant network connectivity and as a result the agents may not be able to always reach the Azure Arc services. This semi-connected mode however is a supported scenario. To support semi-connected modes of deployment, for features like configurations and extensions, agents rely on pulling of desired state specification from the Arc services and later realizing this state on the cluster.
+Azure Arc-enabled Kubernetes requires deployment of Azure Arc agents on your Kubernetes clusters so that capabilities such as configurations (GitOps), extensions, Cluster Connect and Custom Location are made available on the cluster. Kubernetes clusters deployed on the edge may not have constant network connectivity, and as a result, in a semi-connected mode the agents may not always be able to reach the Azure Arc services. This topic explains how Azure Arc features can be used with semi-connected modes of deployment.
## Understand connectivity modes
-| Connectivity mode | Description |
-| -- | -- |
-| Fully connected | Agents can consistently communicate with Azure with little delay in propagating GitOps configurations, enforcing Azure Policy and Gatekeeper policies, and collecting workload metrics and logs in Azure Monitor. |
-| Semi-connected | The managed identity certificate pulled down by the `clusteridentityoperator` is valid for up to 90 days before the certificate expires. Upon expiration, the Azure Arc-enabled Kubernetes resource stops working. To reactivate all Azure Arc features on the cluster, delete, and recreate the Azure Arc-enabled Kubernetes resource and agents. During the 90 days, connect the cluster at least once every 30 days. |
-| Disconnected | Kubernetes clusters in disconnected environments unable to access Azure are currently unsupported by Azure Arc-enabled Kubernetes. If this capability is of interest to you, submit or up-vote an idea on [Azure Arc's UserVoice forum](https://feedback.azure.com/d365community/forum/5c778dec-0625-ec11-b6e6-000d3a4f0858).
+When working with Azure Arc-enabled Kubernetes clusters, it's important to understand how network connectivity modes impact your operations.
+- **Fully connected**: With ongoing network connectivity, agents can consistently communicate with Azure. In this mode, there is typically little delay with tasks such as propagating GitOps configurations, enforcing Azure Policy and Gatekeeper policies, or collecting workload metrics and logs in Azure Monitor.
+- **Semi-connected**: Azure Arc agents can pull the desired state specification from the Arc services, then later realize this state on the cluster.
+ > [!IMPORTANT]
+ > The managed identity certificate pulled down by the `clusteridentityoperator` is valid for up to 90 days before it expires. The agents will try to renew the certificate during this time period; however, if there is no network connectivity, the certificate may expire, and the Azure Arc-enabled Kubernetes resource will stop working. Because of this, we recommend ensuring that the connected cluster has network connectivity at least once every 30 days. If the certificate expires, you'll need to delete and then recreate the Azure Arc-enabled Kubernetes resource and agents in order to reactivate Azure Arc features on the cluster.
+- **Disconnected**: Kubernetes clusters in disconnected environments that are unable to access Azure are not currently supported by Azure Arc-enabled Kubernetes.
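The timing rules in the semi-connected note above (a 90-day certificate lifetime, with a connection at least every 30 days) can be sketched as a small helper; the function and constant names here are illustrative, not part of any Azure SDK:

```python
from datetime import datetime, timedelta

CERT_LIFETIME_DAYS = 90      # managed identity certificate validity (per the note above)
CHECK_IN_INTERVAL_DAYS = 30  # recommended maximum gap between cluster connections

def next_check_in_deadline(last_connected: datetime) -> datetime:
    """Latest date by which the cluster should reach Azure again."""
    return last_connected + timedelta(days=CHECK_IN_INTERVAL_DAYS)

def certificate_expired(issued: datetime, now: datetime) -> bool:
    """True once the 90-day certificate lifetime has elapsed without renewal."""
    return now >= issued + timedelta(days=CERT_LIFETIME_DAYS)

# Example: a cluster last seen on 1 June should check in by 1 July.
print(next_check_in_deadline(datetime(2022, 6, 1)).date())  # 2022-07-01
```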
## Connectivity status
The connectivity status of a cluster is determined by the time of the latest hea
| Status | Description |
| -- | -- |
-| Connecting | Azure Arc-enabled Kubernetes resource is created in Azure Resource Manager, but service hasn't received the agent heartbeat yet. |
-| Connected | Azure Arc-enabled Kubernetes service received an agent heartbeat sometime within the previous 15 minutes. |
-| Offline | Azure Arc-enabled Kubernetes resource was previously connected, but the service hasn't received any agent heartbeat for 15 minutes. |
-| Expired | Managed identity certificate of the cluster has an expiration window of 90 days after it is issued. Once this certificate expires, the resource is considered `Expired` and all features such as configuration, monitoring, and policy stop working on this cluster. More information on how to address expired Azure Arc-enabled Kubernetes resources can be found [in the FAQ article](./faq.md#how-do-i-address-expired-azure-arc-enabled-kubernetes-resources). |
+| Connecting | The Azure Arc-enabled Kubernetes resource has been created in Azure, but the service hasn't received the agent heartbeat yet. |
+| Connected | The Azure Arc-enabled Kubernetes service received an agent heartbeat within the previous 15 minutes. |
+| Offline | The Azure Arc-enabled Kubernetes resource was previously connected, but the service hasn't received any agent heartbeat for 15 minutes. |
+| Expired | The managed identity certificate of the cluster has expired. In this state, Azure Arc features will no longer work on the cluster. For more information on how to address expired Azure Arc-enabled Kubernetes resources, see the [FAQ](./faq.md#how-do-i-address-expired-azure-arc-enabled-kubernetes-resources). |
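The status table above can be encoded as a small lookup; a sketch with illustrative names (in practice the status is surfaced in the `connectivityStatus` field of the connected cluster resource):

```python
from typing import Optional

def connectivity_status(minutes_since_heartbeat: Optional[int],
                        cert_expired: bool = False) -> str:
    """Map agent heartbeat recency to the statuses described in the table above."""
    if cert_expired:
        return "Expired"      # managed identity certificate has expired
    if minutes_since_heartbeat is None:
        return "Connecting"   # resource created, no heartbeat received yet
    if minutes_since_heartbeat < 15:
        return "Connected"    # heartbeat within the previous 15 minutes
    return "Offline"          # previously connected, silent for 15+ minutes

print(connectivity_status(5))  # Connected
```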
## Next steps
-* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
-* Learn more about the creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md).
+- Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+- Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md).
azure-arc Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/faq.md
Title: "Azure Arc-enabled Kubernetes and GitOps frequently asked questions" Previously updated : 04/06/2022 Last updated : 08/22/2022 description: "This article contains a list of frequently asked questions related to Azure Arc-enabled Kubernetes and Azure GitOps" keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps, faq"
If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, AKS on Azure
## How do I address expired Azure Arc-enabled Kubernetes resources?
-The system assigned managed identity associated with your Azure Arc-enabled Kubernetes cluster is only used by the Azure Arc agents to communicate with the Azure Arc services. The certificate associated with this system assigned managed identity has an expiration window of 90 days, and the agents will attempt to renew this certificate between Day 46 to Day 90. Once this certificate expires, the resource is considered `Expired` and all features (such as configuration, monitoring, and policy) stop working on this cluster and you'll then need to delete and connect the cluster to Azure Arc once again. It is thus advisable to have the cluster come online at least once between Day 46 to Day 90 time window to ensure renewal of the managed identity certificate.
+The system-assigned managed identity associated with your Azure Arc-enabled Kubernetes cluster is only used by the Azure Arc agents to communicate with the Azure Arc services. The certificate associated with this system-assigned managed identity has an expiration window of 90 days, and the agents will attempt to renew this certificate between Day 46 and Day 90. To avoid having your managed identity certificate expire, be sure that the cluster comes online at least once between Day 46 and Day 90 so that the certificate can be renewed.
-To check when the certificate is about to expire for any given cluster, run the following command:
+If the managed identity certificate expires, the resource is considered `Expired` and all Azure Arc features (such as configuration, monitoring, and policy) will stop working on the cluster.
+
+To check when the managed identity certificate will expire for a given cluster, run the following command:
```azurecli
az connectedk8s show -n <name> -g <resource-group>
```
-In the output, the value of the `managedIdentityCertificateExpirationTime` indicates when the managed identity certificate will expire (90D mark for that certificate).
+In the output, the value of the `managedIdentityCertificateExpirationTime` indicates when the managed identity certificate will expire (90D mark for that certificate).
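Because `az connectedk8s show` emits JSON, the expiration check can be scripted; a sketch, where only the `managedIdentityCertificateExpirationTime` field name comes from the output described above and the sample document is illustrative:

```python
import json
from datetime import datetime, timezone

# Illustrative fragment of `az connectedk8s show` output; a real response
# contains many more fields.
sample = '{"name": "my-cluster", "managedIdentityCertificateExpirationTime": "2022-11-20T00:00:00Z"}'

def days_until_expiry(show_output: str, now: datetime) -> int:
    """Days remaining before the managed identity certificate expires."""
    doc = json.loads(show_output)
    expires = datetime.fromisoformat(
        doc["managedIdentityCertificateExpirationTime"].replace("Z", "+00:00"))
    return (expires - now).days

remaining = days_until_expiry(sample, datetime(2022, 8, 22, tzinfo=timezone.utc))
print(remaining)  # 90
if remaining < 0:
    print("connectivityStatus is Expired; delete and reconnect the cluster")
```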
If the value of `managedIdentityCertificateExpirationTime` indicates a timestamp from the past, then the `connectivityStatus` field in the above output will be set to `Expired`. In such cases, to get your Kubernetes cluster working with Azure Arc again:
-1. Delete Azure Arc-enabled Kubernetes resource and agents on the cluster.
+1. Delete the Azure Arc-enabled Kubernetes resource and agents on the cluster.
```azurecli
az connectedk8s delete -n <name> -g <resource-group>
```
azure-arc Day2 Operations Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/day2-operations-resource-bridge.md
There are two different sets of credentials stored on the Arc resource bridge. B
- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade.
- **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere.
-To update the credentials of the account for Arc resource bridge, use the Azure CLI command [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredential#az-arcappliance-update-infracredentials-vmware). Run the command from a workstation that can access cluster configuration IP address of the Arc resource bridge locally:
+To update the credentials of the account for Arc resource bridge, use the Azure CLI command [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware). Run the command from a workstation that can access cluster configuration IP address of the Arc resource bridge locally:
```azurecli
az arcappliance update-infracredentials vmware --kubeconfig <kubeconfig>
```
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md
To create a private endpoint, follow these steps.
1. In the **Resource** tab, select your subscription, choose the resource type as `Microsoft.Cache/Redis`, and then select the cache you want to connect the private endpoint to.
-1. Select the **Next: Configuration** button at the bottom of the page.
+1. Select the **Next: Virtual Network** button at the bottom of the page.
-1. In the **Configuration** tab, select the virtual network and subnet you created in the previous section.
+1. In the **Virtual Network** tab, select the virtual network and subnet you created in the previous section.
1. Select the **Next: Tags** button at the bottom of the page. 1. Optionally, in the **Tags** tab, enter the name and value if you wish to categorize the resource.
It's only linked to your VNet. Because it's not in your VNet, NSG rules don't ne
## Next steps

- To learn more about Azure Private Link, see the [Azure Private Link documentation](../private-link/private-link-overview.md).
-- To compare various network isolation options for your cache, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
+- To compare various network isolation options for your cache, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md).
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Title: Connect Azure Functions to Azure Cosmos DB using Visual Studio Code description: Learn how to connect Azure Functions to an Azure Cosmos DB account by adding an output binding to your Visual Studio Code project.- Last updated 08/17/2021 - zone_pivot_groups: programming-languages-set-functions-temp ms.devlang: csharp, javascript
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 7/21/2022 Last updated : 8/22/2022
We strongly recommended to update to the latest version at all times, or opt in
## Version details

| Release Date | Release notes | Windows | Linux |
|:---|:---|:---|:---|
-| June 2022 | Bugfixes with user assigned identity support, and reliability improvements | 1.6.0.0 | Coming soon |
+| July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0.0 | None |
+| June 2022 | Bugfixes with user assigned identity support, and reliability improvements | 1.6.0.0 | None |
| May 2022 | <ul><li>Fixed issue where agent stops functioning due to faulty XPath query. With this version, only query related Windows events will fail, other data types will continue to be collected</li><li>Collection of Windows network troubleshooting logs added to 'CollectAMAlogs.ps1' tool</li><li>Linux support for Debian 11 distro</li><li>Fixed issue to list mount paths instead of device names for Linux disk metrics</li></ul> | 1.5.0.0 | 1.21.0 | | April 2022 | <ul><li>Private IP information added in Log Analytics <i>Heartbeat</i> table for Windows and Linux</li><li>Fixed bugs in Windows IIS log collection (preview) <ul><li>Updated IIS site column name to match backend KQL transform</li><li>Added delay to IIS upload task to account for IIS buffering</li></ul></li><li>Fixed Linux CEF syslog forwarding for Sentinel</li><li>Removed 'error' message for Azure MSI token retrieval failure on Arc to show as 'Info' instead</li><li>Support added for Ubuntu 22.04, RHEL 8.5, 8.6, AlmaLinux and RockyLinux distros</li></ul> | 1.4.1.0<sup>Hotfix</sup> | 1.19.3 | | March 2022 | <ul><li>Fixed timestamp and XML format bugs in Windows Event logs</li><li>Full Windows OS information in Log Analytics Heartbeat table</li><li>Fixed Linux performance counters to collect instance values instead of 'total' only</li></ul> | 1.3.0.0 | 1.17.5.0 |
azure-monitor Alerts Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic-portal.md
This sections shows how to use PowerShell commands create, view and manage class
Get-AzAlertRule -ResourceGroup montest -TargetResourceId /subscriptions/s1/resourceGroups/montest/providers/Microsoft.Compute/virtualMachines/testconfig ```
-8. Classic alert rules can no longer be created via PowerShell. To create an alert rule you need to use the new ['Add-AzMetricAlertRule'](/powershell/module/az.monitor/add-azmetricalertrule) command.
+8. Classic alert rules can no longer be created via PowerShell. Use the new ['Add-AzMetricAlertRuleV2'](/powershell/module/az.monitor/add-azmetricalertrulev2) command to create a metric alert rule instead.
## Next steps
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
Metric alert rules include these features:
- You can configure if metric alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). Metric alerts are stateful by default.

The target of the metric alert rule can be:
-- A single resource, such as a VM. See this article for supported resource types.
+- A single resource, such as a VM. See [this article](alerts-metric-near-real-time.md) for supported resource types.
- [Multiple resources](#monitor-multiple-resources) of the same type in the same Azure region, such as a resource group. ### Multiple conditions
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
Title: Application Insights SDK support guidance
description: Support guidance for Application Insights legacy and preview SDKs Previously updated : 03/24/2022 Last updated : 08/22/2022
Support engineers are expected to provide SDK update guidance according to the f
|Current SDK version in use |Alternative version available |Update policy for support |
|---|---|---|
-|Latest GA SDK | No newer supported stable version | **NO UPDATE NECESSARY** |
-|Stable minor version of a GA SDK | Newer supported stable version | **UPDATE RECOMMENDED** |
+|Latest GA SDK | Newer preview version available | **NO UPDATE NECESSARY** |
+|GA SDK | Newer GA released < one year ago | **UPDATE RECOMMENDED** |
+|GA SDK | Newer GA released > one year ago | **UPDATE REQUIRED** |
|Unsupported ([support policy](/lifecycle/faq/azure)) | Any supported version | **UPDATE REQUIRED** |
-|Preview | Stable version | **UPDATE REQUIRED** |
-|Preview | Older stable version | **UPDATE RECOMMENDED** |
-|Preview | Newer preview version, no older stable version | **UPDATE RECOMMENDED** |
+|Latest Preview | No newer version available | **NO UPDATE NECESSARY** |
+|Latest Preview | Newer GA SDK | **UPDATE REQUIRED** |
+|Preview | Newer preview version | **UPDATE REQUIRED** |
+
+> [!NOTE]
+> * General Availability (GA) refers to non-beta versions.
+> * Preview refers to beta versions.
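The support matrix above can be expressed as a small decision table; a sketch, with the table rows encoded directly (the string labels are illustrative, not values from any Azure API):

```python
# Each (current SDK, alternative available) pair maps to the policy row above.
UPDATE_POLICY = {
    ("latest-ga", "newer-preview"):    "NO UPDATE NECESSARY",
    ("ga", "newer-ga-under-one-year"): "UPDATE RECOMMENDED",
    ("ga", "newer-ga-over-one-year"):  "UPDATE REQUIRED",
    ("unsupported", "any-supported"):  "UPDATE REQUIRED",
    ("latest-preview", "none"):        "NO UPDATE NECESSARY",
    ("latest-preview", "newer-ga"):    "UPDATE REQUIRED",
    ("preview", "newer-preview"):      "UPDATE REQUIRED",
}

def update_policy(current: str, alternative: str) -> str:
    """Return the update policy for a (current SDK, alternative) pair."""
    return UPDATE_POLICY[(current, alternative)]

print(update_policy("ga", "newer-ga-over-one-year"))  # UPDATE REQUIRED
```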
> [!TIP] > Switching to [auto-instrumentation](codeless-overview.md) eliminates the need for manual SDK updates.
azure-monitor Tutorial Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-asp-net-core.md
+
+ Title: Application Insights SDK for ASP.NET Core applications | Microsoft Docs
+description: Application Insights SDK tutorial to monitor ASP.NET Core web applications for availability, performance, and usage.
+
+ms.devlang: csharp
+ Last updated : 08/22/2022+++
+# Enable Application Insights for ASP.NET Core applications
+
+This article describes how to enable Application Insights for an [ASP.NET Core](/aspnet/core) application deployed as an Azure Web App. This implementation uses an SDK-based approach; an [auto-instrumentation approach](./codeless-overview.md) is also available.
+
+Application Insights can collect the following telemetry from your ASP.NET Core application:
+
+> [!div class="checklist"]
+> * Requests
+> * Dependencies
+> * Exceptions
+> * Performance counters
+> * Heartbeats
+> * Logs
+
+We'll use an [ASP.NET Core MVC application](/aspnet/core/tutorials/first-mvc-app) example that targets `net6.0`. You can apply these instructions to all ASP.NET Core applications. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), follow [these instructions](./worker-service.md) instead.
+
+> [!NOTE]
+> A preview [OpenTelemetry-based .NET offering](./opentelemetry-enable.md?tabs=net) is available. [Learn more](./opentelemetry-overview.md).
++
+## Supported scenarios
+
+The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) can monitor your applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. Support covers the following scenarios:
+* **Operating system**: Windows, Linux, or Mac
+* **Hosting method**: In process or out of process
+* **Deployment method**: Framework dependent or self-contained
+* **Web server**: IIS (Internet Information Services) or Kestrel
+* **Hosting platform**: The Web Apps feature of Azure App Service, Azure VM, Docker, Azure Kubernetes Service (AKS), and so on
+* **.NET Core version**: All officially [supported .NET Core versions](https://dotnet.microsoft.com/download/dotnet-core) that aren't in preview
+* **IDE**: Visual Studio, Visual Studio Code, or command line
+
+## Prerequisites
+
+If you'd like to follow along with the guidance in this article, you'll need the following prerequisites.
+
+* Visual Studio 2022
+* Visual Studio Workloads: ASP.NET and web development, Data storage and processing, and Azure development
+* .NET 6.0
+* Azure subscription and user account (with the ability to create and delete resources)
+
+## Deploy Azure resources
+
+Follow the guidance to deploy the sample application from its [GitHub repository](https://github.com/solliancenet/appinsights-azurecafe).
+
+To provide globally unique names for some resources, a five-character suffix is assigned during deployment. Make note of this suffix for use later in this article.
+
+![The deployed Azure resource listing displays with the 5 character suffix highlighted.](./media/tutorial-asp-net-core/naming-suffix.png "Record the 5 character suffix")
+
+## Create an Application Insights resource
+
+1. In the [Azure portal](https://portal.azure.com), locate and select the **application-insights-azure-cafe** resource group.
+
+2. From the top toolbar menu, select **+ Create**.
+
+ ![The resource group application-insights-azure-cafe displays with the + Create button highlighted on the toolbar menu.](./media/tutorial-asp-net-core/create-resource-menu.png "Create new resource")
+
+3. On the **Create a resource** screen, search for and select `Application Insights` in the marketplace search textbox.
+
+ ![The Create a resource screen displays with Application Insights entered into the search box and Application Insights highlighted from the search results.](./media/tutorial-asp-net-core/search-application-insights.png "Search for Application Insights")
+
+4. On the Application Insights resource overview screen, select **Create**.
+
+ ![The Application Insights overview screen displays with the Create button highlighted.](./media/tutorial-asp-net-core/create-application-insights-overview.png "Create Application Insights resource")
+
+5. On the **Basics** tab of the Application Insights screen, complete the form as follows, then select the **Review + create** button. Fields not specified in the table below may retain their default values.
+
+ | Field | Value |
+ |-|-|
+ | Name | Enter `azure-cafe-application-insights-{SUFFIX}`, replacing **{SUFFIX}** with the appropriate suffix value recorded earlier. |
+ | Region | Select the same region chosen when deploying the article resources. |
    | Log Analytics Workspace | Select `azure-cafe-log-analytics-workspace`; alternatively, a new Log Analytics workspace can be created here. |
+
+ ![The Application Insights Basics tab displays with a form populated with the preceding values.](./media/tutorial-asp-net-core/application-insights-basics-tab.png "Application Insights Basics tab")
+
+6. Once validation has passed, select **Create** to deploy the resource.
+
+ ![The Application Insights validation screen displays indicating Validation passed and the Create button is highlighted.](./media/tutorial-asp-net-core/application-insights-validation-passed.png "Validation passed")
+
+7. Once deployment has completed, return to the `application-insights-azure-cafe` resource group, and select the deployed Application Insights resource.
+
+ ![The Azure Cafe resource group displays with the Application Insights resource highlighted.](./media/tutorial-asp-net-core/application-insights-resource-group.png "Application Insights")
+
+8. On the Overview screen of the Application Insights resource, copy the **Connection String** value for use in the next section of this article.
+
+ ![The Application Insights Overview screen displays with the Connection String value highlighted and the Copy button selected.](./media/tutorial-asp-net-core/application-insights-connection-string-overview.png "Copy Connection String value")
+
+## Configure the Application Insights connection string application setting in the web App Service
+
+1. Return to the `application-insights-azure-cafe` resource group, locate and open the **azure-cafe-web-{SUFFIX}** App Service resource.
+
+ ![The Azure Cafe resource group displays with the azure-cafe-web-{SUFFIX} resource highlighted.](./media/tutorial-asp-net-core/web-app-service-resource-group.png "Web App Service")
+
+2. From the left menu, beneath the Settings header, select **Configuration**. Then, on the **Application settings** tab, select **+ New application setting** beneath the Application settings header.
+
+ ![The App Service resource screen displays with the Configuration item selected from the left menu and the + New application setting toolbar button highlighted.](./media/tutorial-asp-net-core/app-service-app-setting-button.png "Create New application setting")
+
+3. In the Add/Edit application setting blade, complete the form as follows and select **OK**.
+
+ | Field | Value |
+ |-|-|
+ | Name | APPLICATIONINSIGHTS_CONNECTION_STRING |
+ | Value | Paste the Application Insights connection string obtained in the preceding section. |
+
+ ![The Add/Edit application setting blade displays populated with the preceding values.](./media/tutorial-asp-net-core/add-edit-app-setting.png "Add/Edit application setting")
+
+4. On the App Service Configuration screen, select the **Save** button from the toolbar menu. When prompted to save the changes, select **Continue**.
+
+ ![The App Service Configuration screen displays with the Save button highlighted on the toolbar menu.](./media/tutorial-asp-net-core/save-app-service-configuration.png "Save the App Service Configuration")
+
+## Install the Application Insights NuGet Package
+
+We need to configure the ASP.NET Core MVC web application to send telemetry. This is accomplished using the [Application Insights for ASP.NET Core web applications NuGet package](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore).
+
+1. With Visual Studio, open `1 - Starter Application\src\AzureCafe.sln`.
+
+2. In the Solution Explorer panel, right-click the AzureCafe project file, and select **Manage NuGet Packages**.
+
+ ![The Solution Explorer displays with Manage NuGet Packages selected from the context menu.](./media/tutorial-asp-net-core/manage-nuget-packages-menu.png "Manage NuGet Packages")
+
+3. Select the **Browse** tab, then search for and select **Microsoft.ApplicationInsights.AspNetCore**. Select **Install**, and accept the license terms. It is recommended to use the latest stable version. Find full release notes for the SDK on the [open-source GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/releases).
+
+ ![The NuGet tab displays with the Browse tab selected and Microsoft.ApplicationInsights.AspNetCore is entered in the search box. The Microsoft.ApplicationInsights.AspNetCore package is selected from a list of results. In the right pane, the latest stable version is selected from a drop down list and the Install button is highlighted.](./media/tutorial-asp-net-core/asp-net-core-install-nuget-package.png "Install NuGet Package")
+
+4. Keep Visual Studio open for the next section of the article.
+
+## Enable Application Insights server-side telemetry
+
+The Application Insights for ASP.NET Core web applications NuGet package encapsulates features to enable sending server-side telemetry to the Application Insights resource in Azure.
+
+1. From the Visual Studio Solution Explorer, locate and open the **Program.cs** file.
+
+ ![The Visual Studio Solution Explorer displays with the Program.cs highlighted.](./media/tutorial-asp-net-core/solution-explorer-programcs.png "Program.cs")
+
+2. Insert the following code prior to the `builder.Services.AddControllersWithViews()` statement. This code automatically reads the Application Insights connection string value from configuration. The `AddApplicationInsightsTelemetry` method registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container, which will then be used to fulfill [ILogger](/dotnet/api/microsoft.extensions.logging.ilogger) and [ILogger\<TCategoryName\>](/dotnet/api/microsoft.extensions.logging.iloggerprovider) implementation requests.
+
+ ```csharp
+ builder.Services.AddApplicationInsightsTelemetry();
+ ```
+
+ ![A code window displays with the preceding code snippet highlighted.](./media/tutorial-asp-net-core/enable-server-side-telemetry.png "Enable server-side telemetry")
+
+ > [!TIP]
+ > Learn more about [configuration options in ASP.NET Core](/aspnet/core/fundamentals/configuration).
+
+## Enable client-side telemetry for web applications
+
+The preceding steps are enough to help you start collecting server-side telemetry. This application also has client-side components; follow the next steps to start collecting [usage telemetry](./usage-overview.md).
+
+1. In Visual Studio Solution explorer, locate and open `\Views\_ViewImports.cshtml`. Add the following code at the end of the existing file.
+
+ ```cshtml
+ @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
+ ```
+
+ ![The _ViewImports.cshtml file displays with the preceding line of code highlighted.](./media/tutorial-asp-net-core/view-imports-injection.png "JavaScriptSnippet injection")
+
+2. To properly enable client-side monitoring for your application, the JavaScript snippet must appear in the `<head>` section of each page of your application that you want to monitor. In Visual Studio Solution Explorer, locate and open `\Views\Shared\_Layout.cshtml`, and insert the following code immediately preceding the closing `</head>` tag.
+
+ ```cshtml
+ @Html.Raw(JavaScriptSnippet.FullScript)
+ ```
+
+ ![The _Layout.cshtml file displays with the preceding line of code highlighted within the head section of the page.](./media/tutorial-asp-net-core/layout-head-code.png "The head section of _Layout.cshtml")
+
+ > [!TIP]
+ > As an alternative to using the `FullScript`, the `ScriptBody` is available. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy:
+
+ ```cshtml
+ <script> // apply custom changes to this script tag.
+ @Html.Raw(JavaScriptSnippet.ScriptBody)
+ </script>
+ ```
+
+> [!NOTE]
+> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#add-the-javascript-sdk).
+
+## Enable monitoring of database queries
+
+When investigating causes for performance degradation, it's important to include insights into database calls. Enable monitoring through configuration of the [dependency module](./asp-net-dependencies.md). Dependency monitoring, including SQL, is enabled by default. Follow these steps to capture the full SQL query text.
+
+> [!NOTE]
+> SQL text may contain sensitive data such as passwords and PII. Be careful when enabling this feature.
+
+1. From the Visual Studio Solution Explorer, locate and open the **Program.cs** file.
+
+2. At the top of the file, add the following `using` statement.
+
+ ```csharp
+ using Microsoft.ApplicationInsights.DependencyCollector;
+ ```
+
+3. Immediately following the `builder.Services.AddApplicationInsightsTelemetry()` code, insert the following to enable SQL command text instrumentation.
+
+ ```csharp
+ builder.Services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, o) => { module.EnableSqlCommandTextInstrumentation = true; });
+ ```
+
+ ![A code window displays with the preceding code highlighted.](./media/tutorial-asp-net-core/enable-sql-command-text-instrumentation.png "Enable SQL command text instrumentation")
+
+## Run the Azure Cafe web application
+
+After the web application code is deployed, telemetry will flow to Application Insights. The Application Insights SDK automatically collects incoming web requests to your application.
+
+1. Right-click the **AzureCafe** project in Solution Explorer and select **Publish** from the context menu.
+
+ ![The Visual Studio Solution Explorer displays with the Azure Cafe project selected and the Publish context menu item highlighted.](./media/tutorial-asp-net-core/web-project-publish-context-menu.png "Publish Web App")
+
+2. Select **Publish** to promote the new code to the Azure App Service.
+
+ ![The AzureCafe publish profile displays with the Publish button highlighted.](./media/tutorial-asp-net-core/publish-profile.png "Publish profile")
+
+3. Once the publish has succeeded, a new browser window opens to the Azure Cafe web application.
+
+ ![The Azure Cafe web application displays.](./media/tutorial-asp-net-core/azure-cafe-index.png "Azure Cafe web application")
+
+4. Perform various activities in the web application to generate some telemetry.
+
+ 1. Select **Details** next to a Cafe to view its menu and reviews.
+
+ ![A portion of the Azure Cafe list displays with the Details button highlighted.](./media/tutorial-asp-net-core/cafe-details-button.png "Azure Cafe Details")
+
+ 2. On the Cafe screen, select the **Reviews** tab to view and add reviews. Select the **Add review** button to add a review.
+
+ ![The Cafe details screen displays with the Add review button highlighted.](./media/tutorial-asp-net-core/cafe-add-review-button.png "Add review")
+
+ 3. On the Create a review dialog, enter a name, rating, comments, and upload a photo for the review. Once completed, select **Add review**.
+
+ ![The Create a review dialog displays.](./media/tutorial-asp-net-core/create-a-review-dialog.png "Create a review")
+
+ 4. Repeat adding reviews as desired to generate additional telemetry.
+
+### Live metrics
+
+[Live Metrics](./live-stream.md) can be used to quickly verify if Application Insights monitoring is configured correctly. It might take a few minutes for telemetry to appear in the portal and analytics, but Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry like Requests, Dependencies, and Traces.
+
+### Application map
+
+The sample application makes calls to multiple Azure resources, including Azure SQL, Azure Blob Storage, and the Azure Language Service (for review sentiment analysis).
+
+![The Azure Cafe sample application architecture displays.](./media/tutorial-asp-net-core/azure-cafe-app-insights.png "Azure Cafe sample application architecture")
+
+Application Insights inspects incoming telemetry data and can generate a visual map of the system integrations it detects.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Open the sample application resource group `application-insights-azure-cafe`.
+
+3. From the list of resources, select the `azure-cafe-insights-{SUFFIX}` Application Insights resource.
+
+4. Select **Application map** from the left menu, beneath the **Investigate** heading. Observe the generated Application map.
+
+ ![The Application Insights application map displays.](./media/tutorial-asp-net-core/application-map.png "Application map")
+
+### Viewing HTTP calls and database SQL command text
+
+1. In the Azure portal, open the Application Insights resource.
+
+2. Beneath the **Investigate** header on the left menu, select **Performance**.
+
+3. The **Operations** tab contains details of the HTTP calls received by the application. You can also toggle between Server and Browser (client-side) views of data.
+
+ ![The Performance screen of Application Insights displays with the toggle between Server and Browser highlighted along with the list of HTTP calls received by the application.](./media/tutorial-asp-net-core/server-performance.png "Server performance HTTP calls")
+
+4. Select an operation from the table, and then drill into a sample of the request.
+
+ ![The Performance screen displays with a POST operation selected, the Drill into samples button is highlighted and a sample is selected from the suggested list.](./media/tutorial-asp-net-core/select-operation-performance.png "Drill into an operation")
+
+5. The end-to-end transaction displays for the selected request. In this case, a review with an image was created, so the transaction includes calls to Azure Storage and the Language Service (for sentiment analysis), as well as database calls into Azure SQL to persist the review. In this example, the first selected event displays information about the HTTP POST call.
+
+ ![The End-to-end transaction displays with the HTTP Post call selected.](./media/tutorial-asp-net-core/e2e-http-call.png "HTTP POST details")
+
+6. Select a SQL item to review the SQL command text issued to the database.
+
+ ![The End-to-end transaction displays with SQL command details.](./media/tutorial-asp-net-core/e2e-db-call.png "SQL Command text details")
+
+7. Optionally select Dependency (outgoing) requests to Azure Storage or the Language Service.
+
+8. Return to the **Performance** screen, and select the **Dependencies** tab to investigate calls into external resources. Notice the Operations table includes calls into Sentiment Analysis, Blob Storage, and Azure SQL.
+
+ ![The Performance screen displays with the Dependencies tab selected and the Operations table highlighted.](./media/tutorial-asp-net-core/performance-dependencies.png "Dependency Operations")
+
+## Application logging with Application Insights
+
+### Logging overview
+
+Application Insights is one type of [logging provider](/dotnet/core/extensions/logging-providers) available to ASP.NET Core applications. It becomes available when the [Application Insights for ASP.NET Core](#install-the-application-insights-nuget-package) NuGet package is installed and [server-side telemetry collection is enabled](#enable-application-insights-server-side-telemetry). As a reminder, the following code in **Program.cs** registers the `ApplicationInsightsLoggerProvider` with the built-in dependency injection container.
+
+```csharp
+builder.Services.AddApplicationInsightsTelemetry();
+```
+
+With the `ApplicationInsightsLoggerProvider` registered as the logging provider, the app is ready to log to Application Insights using either constructor injection with <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601>.
+
+> [!NOTE]
+> With default settings, the logging provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater.
+
+Consider the following example controller. It demonstrates constructor injection of an `ILogger`, which is resolved with the `ApplicationInsightsLoggerProvider` registered in the dependency injection container. Observe that the **Get** method records an Information, a Warning, and an Error message.
+
+> [!NOTE]
+> By default, Information-level traces aren't recorded. Only Warning and above are captured.
+
+```csharp
+using Microsoft.AspNetCore.Mvc;
+
+[Route("api/[controller]")]
+[ApiController]
+public class ValuesController : ControllerBase
+{
+ private readonly ILogger _logger;
+
+ public ValuesController(ILogger<ValuesController> logger)
+ {
+ _logger = logger;
+ }
+
+ [HttpGet]
+ public ActionResult<IEnumerable<string>> Get()
+ {
+ //Information-level traces aren't captured by default
+ _logger.LogInformation("An example of an Information trace..");
+ _logger.LogWarning("An example of a Warning trace..");
+ _logger.LogError("An example of an Error level message");
+
+ return new string[] { "value1", "value2" };
+ }
+}
+```
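+
+If you do want Information-level traces sent to Application Insights, you can lower the provider's filter when registering services. The following is a minimal sketch using the standard `AddFilter` API; the `ApplicationInsightsLoggerProvider` type lives in the `Microsoft.Extensions.Logging.ApplicationInsights` namespace.
+
+```csharp
+using Microsoft.Extensions.Logging.ApplicationInsights;
+
+var builder = WebApplication.CreateBuilder(args);
+builder.Services.AddApplicationInsightsTelemetry();
+
+// Capture Information and above for all categories in the
+// Application Insights provider (the default is Warning and above).
+builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.Information);
+```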
+
+For more information, see [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging).
+
+## View logs in Application Insights
+
+The ValuesController above is deployed with the sample application and is located in the **Controllers** folder of the project.
+
+1. Using an internet browser, open the sample application. In the address bar, append `/api/Values` and press <kbd>Enter</kbd>.
+
+ ![A browser window displays with /api/Values appended to the URL in the address bar.](media/tutorial-asp-net-core/values-api-url.png "Values API URL")
+
+2. Wait a few moments, then return to the **Application Insights** resource in the [Azure portal](https://portal.azure.com).
+
+ ![A resource group displays with the Application Insights resource highlighted.](./media/tutorial-asp-net-core/application-insights-resource-group.png "Resource Group")
+
+3. From the left menu of the Application Insights resource, select **Logs** from beneath the **Monitoring** section. In the **Tables** pane, double-click on the **traces** table, located under the **Application Insights** tree. Modify the query to retrieve traces for the **Values** controller as follows, then select **Run** to filter the results.
+
+ ```kql
+ traces
+ | where operation_Name == "GET Values/Get"
+ ```
+
+4. Observe that the results display the logging messages from the controller. A severity of 2 indicates Warning level, and a severity of 3 indicates Error level.
+
+5. Alternatively, the query can be written to retrieve results based on the category of the log. By default, the category is the fully qualified name of the class where the `ILogger` is injected, in this case **ValuesController**. If the class has a namespace, the category is prefixed with it. Rewrite and run the following query to retrieve results based on category.
+
+ ```kql
+ traces
+ | where customDimensions.CategoryName == "ValuesController"
+ ```
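+
+   The severity values described earlier can also be used to filter. A sketch, assuming the standard `severityLevel` column of the **traces** table:
+
+   ```kql
+   traces
+   | where operation_Name == "GET Values/Get" and severityLevel >= 3 // Error and above
+   ```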
+
+## Control the level of logs sent to Application Insights
+
+`ILogger` implementations have a built-in mechanism to apply [log filtering](/dotnet/core/extensions/logging#how-filtering-rules-are-applied). This filtering lets you control the logs that are sent to each registered provider, including the Application Insights provider. You can use the filtering either in configuration (using an *appsettings.json* file) or in code. For more information about log levels and guidance on appropriate use, see the [Log Level](/aspnet/core/fundamentals/logging#log-level) documentation.
+
+The following examples show how to apply filter rules to the `ApplicationInsightsLoggerProvider` to control the level of logs sent to Application Insights.
+
+### Create filter rules with configuration
+
+The `ApplicationInsightsLoggerProvider` is aliased as **ApplicationInsights** in configuration. The following section of an *appsettings.json* file sets the default log level for all providers to <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType>. The ApplicationInsights provider configuration overrides this default for categories that start with "ValuesController", capturing <xref:Microsoft.Extensions.Logging.LogLevel.Error?displayProperty=nameWithType> and higher.
+
+```json
+{
+ //... additional code removed for brevity
+ "Logging": {
+ "LogLevel": { // No provider, LogLevel applies to all the enabled providers.
+ "Default": "Warning"
+ },
+ "ApplicationInsights": { // Specific to the provider, LogLevel applies to the Application Insights provider.
+ "LogLevel": {
+ "ValuesController": "Error" //Log Level for the "ValuesController" category
+ }
+ }
+ }
+}
+```
+
+If you deploy the sample application with the preceding *appsettings.json*, only the Error trace is sent to Application Insights when you interact with the **ValuesController**. Because the **LogLevel** for the **ValuesController** category is set to **Error**, the **Warning** trace is suppressed.
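+
+The same rules can be expressed in code. A sketch of an equivalent setup, assuming the sample's **Program.cs**:
+
+```csharp
+using Microsoft.Extensions.Logging.ApplicationInsights;
+
+var builder = WebApplication.CreateBuilder(args);
+builder.Services.AddApplicationInsightsTelemetry();
+
+// Default for all providers: Warning and above.
+builder.Logging.SetMinimumLevel(LogLevel.Warning);
+
+// Application Insights provider, "ValuesController" category: Error and above.
+builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("ValuesController", LogLevel.Error);
+```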
+
+## Turn off logging to Application Insights
+
+To disable logging using configuration, set all LogLevel values to "None".
+
+```json
+{
+ //... additional code removed for brevity
+ "Logging": {
+ "LogLevel": { // No provider, LogLevel applies to all the enabled providers.
+ "Default": "None"
+ },
+ "ApplicationInsights": { // Specific to the provider, LogLevel applies to the Application Insights provider.
+ "LogLevel": {
+ "ValuesController": "None" //Log Level for the "ValuesController" category
+ }
+ }
+ }
+}
+```
+
+Similarly, in code, set the default level for the `ApplicationInsightsLoggerProvider` and any category-specific levels to **None**.
+
+```csharp
+using Microsoft.Extensions.Logging.ApplicationInsights;
+
+var builder = WebApplication.CreateBuilder(args);
+builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("", LogLevel.None);
+builder.Logging.AddFilter<ApplicationInsightsLoggerProvider>("ValuesController", LogLevel.None);
+```
+
+## Open-source SDK
+
+* [Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet).
+
+For the latest updates and bug fixes, see the [release notes](./release-notes.md).
+
+## Next steps
+
+* [Explore user flows](./usage-flows.md) to understand how users navigate through your app.
+* [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown.
+* [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage.
+* Use [availability tests](./monitor-web-app-availability.md) to check your app constantly from around the world.
+* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection)
+* [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging)
+* [.NET trace logs in Application Insights](./asp-net-trace-logs.md)
+* [Auto-instrumentation for Application Insights](./codeless-overview.md)
+
azure-monitor Metrics Dynamic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-dynamic-scope.md
Title: View multiple resources in the Azure metrics explorer description: Learn how to visualize multiple resources by using the Azure metrics explorer.- Last updated 12/14/2020- # View multiple resources in the Azure metrics explorer
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|ByteCount|Yes|Bytes|Bytes|Total|Total number of Bytes transmitted within time period|Protocol, Direction|
-|DatapathAvailability|Yes|Datapath Availability (Preview)|Count|Average|NAT Gateway Datapath Availability|No Dimensions|
-|PacketCount|Yes|Packets|Count|Total|Total number of Packets transmitted within time period|Protocol, Direction|
-|PacketDropCount|Yes|Dropped Packets|Count|Total|Count of dropped packets|No Dimensions|
-|SNATConnectionCount|Yes|SNAT Connection Count|Count|Total|Total concurrent active connections|Protocol, ConnectionState|
-|TotalConnectionCount|Yes|Total SNAT Connection Count|Count|Total|Total number of active SNAT connections|Protocol|
+|ByteCount|No|Bytes|Bytes|Total|Total number of Bytes transmitted within time period|Protocol, Direction|
+|DatapathAvailability|No|Datapath Availability (Preview)|Count|Average|NAT Gateway Datapath Availability|No Dimensions|
+|PacketCount|No|Packets|Count|Total|Total number of Packets transmitted within time period|Protocol, Direction|
+|PacketDropCount|No|Dropped Packets|Count|Total|Count of dropped packets|No Dimensions|
+|SNATConnectionCount|No|SNAT Connection Count|Count|Total|Total concurrent active connections|Protocol, ConnectionState|
+|TotalConnectionCount|No|Total SNAT Connection Count|Count|Total|Total number of active SNAT connections|Protocol|
## Microsoft.Network/networkInterfaces
azure-monitor Resource Logs Blob Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-blob-format.md
Title: Prepare for format change to Azure Monitor resource logs description: Azure resource logs moved to use append blobs on November 1, 2018.- Last updated 07/06/2018- # Prepare for format change to Azure Monitor platform logs archived to a storage account
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
The method described in this article describes a scheduled export from a log que
## Overview This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs/) which lets you run a log query from a Logic App and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob/) is used in this procedure to send the query output to Azure storage.
-[![Logic app overview](media/logs-export-logic-app/logic-app-overview.png)](media/logs-export-logic-app/logic-app-overview.png#lightbox)
+[![Logic app overview](media/logs-export-logic-app/logic-app-overview.png "Screenshot of Logic app flow.")](media/logs-export-logic-app/logic-app-overview.png#lightbox)
When you export data from a Log Analytics workspace, filter and aggregate your log data in the query, and optimize the query to limit the amount of data processed by your Logic App workflow to the required data. For example, if you need to archive sign-in events, filter for the required events and project only the required fields. For example:
Log Analytics workspace and log queries in Azure Monitor are multitenancy servic
1. **Create Logic App**
- 1. Go to **Logic Apps** in the Azure portal and click **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new Logic App and then give it a unique name. You can turn on **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.<br>
- [![Create Logic App](media/logs-export-logic-app/create-logic-app.png)](media/logs-export-logic-app/create-logic-app.png#lightbox)
+ 1. Go to **Logic Apps** in the Azure portal and click **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new Logic App and then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-logic-apps-log-analytics.md). This setting isn't required for using the Azure Monitor Logs connector.
+\
+ [![Create Logic App](media/logs-export-logic-app/create-logic-app.png "Screenshot of Logic App resource create.")](media/logs-export-logic-app/create-logic-app.png#lightbox)
- 1. Click **Review + create** and then **Create**. When the deployment is complete, click **Go to resource** to open the **Logic Apps Designer**.
+ 2. Click **Review + create** and then **Create**. When the deployment is complete, click **Go to resource** to open the **Logic Apps Designer**.
-1. **Create a trigger for the Logic App**
+2. **Create a trigger for the Logic App**
- 1. Under **Start with a common trigger**, select **Recurrence**. This creates a Logic App that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.<br>
- [![Recurrence action](media/logs-export-logic-app/recurrence-action.png)](media/logs-export-logic-app/recurrence-action.png#lightbox)
+ 1. Under **Start with a common trigger**, select **Recurrence**. This creates a Logic App that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day** and in the **Interval** box, enter **1** to run the workflow once per day.
+ \
+ [![Recurrence action](media/logs-export-logic-app/recurrence-action.png "Screenshot of recurrence action create.")](media/logs-export-logic-app/recurrence-action.png#lightbox)
-2. **Add Azure Monitor Logs action**
+3. **Add Azure Monitor Logs action**
The Azure Monitor Logs action lets you specify the query to run. The log query used in this example is optimized for hourly recurrence and collects the data ingested for the particular execution time. For example, if the workflow runs at 4:35, the time range would be 3:00 to 4:00. If you change the Logic App to run at a different frequency, you need to change the query as well. For example, if you set the recurrence to run daily, you would set startTime in the query to `startofday(make_datetime(year,month,day,0,0))`. You will be prompted to select a tenant to grant access to the Log Analytics workspace with the account that the workflow will use to run the query.
- 1. Click **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, type **azure monitor** and then select **Azure Monitor Logs**.<br>
- [![Azure Monitor Logs action](media/logs-export-logic-app/select-azure-monitor-connector.png)](media/logs-export-logic-app/select-azure-monitor-connector.png#lightbox)
+ 1. Click **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, type **azure monitor** and then select **Azure Monitor Logs**.
+ \
+ [![Azure Monitor Logs action](media/logs-export-logic-app/select-azure-monitor-connector.png "Screenshot of Azure Monitor Logs action create.")](media/logs-export-logic-app/select-azure-monitor-connector.png#lightbox)
- 2. Click **Azure Log Analytics – Run query and list results**.<br>
- [![Screenshot of a new action being added to a step in the Logic App Designer. Azure Monitor Logs is highlighted under Choose an action.](media/logs-export-logic-app/select-query-action-list.png)](media/logs-export-logic-app/select-query-action-list.png#lightbox)
+ 1. Click **Azure Log Analytics – Run query and list results**.
+ \
+ [![Azure Monitor Logs is highlighted under Choose an action.](media/logs-export-logic-app/select-query-action-list.png "Screenshot of a new action being added to a step in the Logic App Designer.")](media/logs-export-logic-app/select-query-action-list.png#lightbox)
- 3. Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select *Log Analytics Workspace* for the **Resource Type** and then select the workspace's name under **Resource Name**.
+ 2. Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select *Log Analytics Workspace* for the **Resource Type** and then select the workspace's name under **Resource Name**.
- 4. Add the following log query to the **Query** window.
+ 3. Add the following log query to the **Query** window.
```Kusto let dt = now();
Log Analytics workspace and log queries in Azure Monitor are multitenancy servic
ResourceId = _ResourceId ```
- 5. The **Time Range** specifies the records that will be included in the query based on the **TimeGenerated** column. This should be set to a value greater than the time range selected in the query. Since this query isn't using the **TimeGenerated** column, then **Set in query** option isn't available. See [Query scope](./scope.md) for more details about the time range. Select **Last 4 hours** for the **Time Range**. This will ensure that any records with an ingestion time larger than **TimeGenerated** will be included in the results.<br>
- [![Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logs-export-logic-app/run-query-list-action.png)](media/logs-export-logic-app/run-query-list-action.png#lightbox)
+ 4. The **Time Range** specifies the records that will be included in the query based on the **TimeGenerated** column. Set it to a value greater than the time range selected in the query. Because this query doesn't use the **TimeGenerated** column, the **Set in query** option isn't available. See [Query scope](./scope.md) for more details about the time range. Select **Last 4 hours** for the **Time Range**. This ensures that any records with an ingestion time later than **TimeGenerated** are included in the results.
+ \
+ [![Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logs-export-logic-app/run-query-list-action.png "Screenshot of the settings for the new Azure Monitor Logs action named Run query and visualize results.")](media/logs-export-logic-app/run-query-list-action.png#lightbox)
-3. **Add Parse JSON activity (optional)**
+4. **Add Parse JSON activity (optional)**
The output from the **Run query and list results** action is formatted in JSON. You can parse this data and manipulate it as part of the preparation for **Compose** action.
- You can provide a JSON schema that describes the payload you expect to receive. The designer parses JSON content by using this schema and generates user-friendly tokens that represent the properties in your JSON content. You can then easily reference and use those properties throughout your Logic App's workflow.
-
- 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **json** and then select **Parse JSON**.<br>
- [![Select Parse JSON activity](media/logs-export-logic-app/select-parse-json.png)](media/logs-export-logic-app/select-parse-json.png#lightbox)
-
- 2. Click in the **Content** box to display a list of values from previous activities. Select **Body** from the **Run query and list results** action. This is the output from the log query.<br>
- [![Select Body](media/logs-export-logic-app/select-body.png)](media/logs-export-logic-app/select-body.png#lightbox)
-
- 3. Click **Use sample payload to generate schema**. Run the log query and copy the output to use for the sample payload. For the sample query here, you can use the following output:
-
- ```json
- {
- "TimeGenerated": "2020-09-29T23:11:02.578Z",
- "BlobTime": "2020-09-29T23:00:00Z",
- "OperationName": "Returns Storage Account SAS Token",
- "OperationNameValue": "MICROSOFT.RESOURCES/DEPLOYMENTS/WRITE",
- "Level": "Informational",
- "ActivityStatus": "Started",
- "ResourceGroup": "monitoring",
- "SubscriptionId": "00000000-0000-0000-0000-000000000000",
- "Category": "Administrative",
- "EventSubmissionTimestamp": "2020-09-29T23:11:02Z",
- "ClientIpAddress": "192.168.1.100",
- "ResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/monitoring/providers/microsoft.storage/storageaccounts/my-storage-account"
- }
- ```
-
- [![Parse JSON payload](media/logs-export-logic-app/parse-json-payload.png)](media/logs-export-logic-app/parse-json-payload.png#lightbox)
-
-4. **Add the Compose action**
+ You can provide a JSON schema that describes the payload you expect to receive. The designer parses JSON content by using this schema and generates user-friendly tokens that represent the properties in your JSON content. You can then easily reference and use those properties throughout your Logic App's workflow.
+
+ You can use sample output from the **Run query and list results** step. Click **Run Trigger** on the Logic App ribbon, select **Run**, and then download and save an output record. For the sample query in the previous step, you can use the following sample output:
+
+ ```json
+ {
+ "TimeGenerated": "2020-09-29T23:11:02.578Z",
+ "BlobTime": "2020-09-29T23:00:00Z",
+ "OperationName": "Returns Storage Account SAS Token",
+ "OperationNameValue": "MICROSOFT.RESOURCES/DEPLOYMENTS/WRITE",
+ "Level": "Informational",
+ "ActivityStatus": "Started",
+ "ResourceGroup": "monitoring",
+ "SubscriptionId": "00000000-0000-0000-0000-000000000000",
+ "Category": "Administrative",
+ "EventSubmissionTimestamp": "2020-09-29T23:11:02Z",
+ "ClientIpAddress": "192.168.1.100",
+ "ResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/monitoring/providers/microsoft.storage/storageaccounts/my-storage-account"
+ }
+ ```
+
+ 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **json** and then select **Parse JSON**.
+ \
+ [![Select Parse JSON operator](media/logs-export-logic-app/select-parse-json.png "Screenshot of Parse JSON operator.")](media/logs-export-logic-app/select-parse-json.png#lightbox)
+
+ 1. Click in the **Content** box to display a list of values from previous activities. Select **Body** from the **Run query and list results** action. This is the output from the log query.
+ \
+ [![Select Body](media/logs-export-logic-app/select-body.png "Screenshot of Parse JSON Content setting with output Body from previous step.")](media/logs-export-logic-app/select-body.png#lightbox)
+
+ 1. Click **Use sample payload to generate schema**, and then paste the sample record that you saved earlier.
+\
+ [![Parse JSON payload](media/logs-export-logic-app/parse-json-payload.png "Screenshot of Parse JSON schema.")](media/logs-export-logic-app/parse-json-payload.png#lightbox)
+
+5. **Add the Compose action**
The **Compose** action takes the parsed JSON output and creates the object that you need to store in the blob.
- 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **compose** and then select the **Compose** action.<br>
- [![Select Compose action](media/logs-export-logic-app/select-compose.png)](media/logs-export-logic-app/select-compose.png#lightbox)
+ 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **compose** and then select the **Compose** action.
+ \
+ [![Select Compose action](media/logs-export-logic-app/select-compose.png "Screenshot of Compose action.")](media/logs-export-logic-app/select-compose.png#lightbox)
- 2. Click the **Inputs** box display a list of values from previous activities. Select **Body** from the **Parse JSON** action. This is the parsed output from the log query.<br>
- [![Select body for Compose action](media/logs-export-logic-app/select-body-compose.png)](media/logs-export-logic-app/select-body-compose.png#lightbox)
+ 1. Click the **Inputs** box to display a list of values from previous activities. Select **Body** from the **Parse JSON** action. This is the parsed output from the log query.
+ \
+ [![Select body for Compose action](media/logs-export-logic-app/select-body-compose.png "Screenshot of body for Compose action.")](media/logs-export-logic-app/select-body-compose.png#lightbox)
-5. **Add the Create Blob action**
+6. **Add the Create Blob action**
The Create Blob action writes the composed JSON to storage.
- 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **blob** and then select the **Create Blob** action.<br>
- [![Select Create blob](media/logs-export-logic-app/select-create-blob.png)](media/logs-export-logic-app/select-create-blob.png#lightbox)
+ 1. Click **+ New step**, and then click **+ Add an action**. Under **Choose an action**, type **blob** and then select the **Create Blob** action.
+ \
+ [![Select Create blob](media/logs-export-logic-app/select-create-blob.png "Screenshot of blob storage action create.")](media/logs-export-logic-app/select-create-blob.png#lightbox)
- 2. Type a name for the connection to your Storage Account in **Connection Name** and then click the folder icon in the **Folder path** box to select the container in your Storage Account. Click the **Blob name** to see a list of values from previous activities. Click **Expression** and enter an expression that matches your time interval. For this query which is run hourly, the following expression sets the blob name per previous hour:
+ 1. Type a name for the connection to your Storage Account in **Connection Name** and then click the folder icon in the **Folder path** box to select the container in your Storage Account. Click **Blob name** to see a list of values from previous activities. Click **Expression** and enter an expression that matches your time interval. For this query, which runs hourly, the following expression sets the blob name to the previous hour:
```json subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour') ```
+ \
+ [![Blob expression](media/logs-export-logic-app/blob-expression.png "Screenshot of blob action connection.")](media/logs-export-logic-app/blob-expression.png#lightbox)
- [![Blob expression](media/logs-export-logic-app/blob-expression.png)](media/logs-export-logic-app/blob-expression.png#lightbox)
-
- 3. Click the **Blob content** box to display a list of values from previous activities and then select **Outputs** in the **Compose** section.<br>
- [![Create blob expression](media/logs-export-logic-app/create-blob.png)](media/logs-export-logic-app/create-blob.png#lightbox)
+ 2. Click the **Blob content** box to display a list of values from previous activities and then select **Outputs** in the **Compose** section.
+ \
+ [![Create blob expression](media/logs-export-logic-app/create-blob.png "Screenshot of blob action output configuration.")](media/logs-export-logic-app/create-blob.png#lightbox)
-6. **Test the Logic App**
+7. **Test the Logic App**
- Test the workflow by clicking **Run**. If the workflow has errors, it will be indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md) if necessary.<br>
- [![Runs history](media/logs-export-logic-app/runs-history.png)](media/logs-export-logic-app/runs-history.png#lightbox)
+ Test the workflow by clicking **Run**. If the workflow has errors, it will be indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md) if necessary.
+ \
+ [![Runs history](media/logs-export-logic-app/runs-history.png "Screenshot of trigger run history.")](media/logs-export-logic-app/runs-history.png#lightbox)
-7. **View logs in Storage**
+8. **View logs in Storage**
- Go to the **Storage accounts** menu in the Azure portal and select your Storage Account. Click the **Blobs** tile and select the container you specified in the Create blob action. Select one of the blobs and then **Edit blob**.<br>
- [![Blob data](media/logs-export-logic-app/blob-data.png)](media/logs-export-logic-app/blob-data.png#lightbox)
+ Go to the **Storage accounts** menu in the Azure portal and select your Storage Account. Click the **Blobs** tile and select the container you specified in the Create blob action. Select one of the blobs and then **Edit blob**.
+ \
+ [![Blob data](media/logs-export-logic-app/blob-data.png "Screenshot of sample data exported to blob.")](media/logs-export-logic-app/blob-data.png#lightbox)
## Next steps
azure-netapp-files Performance Impact Kerberos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-impact-kerberos.md
na Previously updated : 07/22/2022 Last updated : 08/22/2022 # Performance impact of Kerberos on Azure NetApp Files NFSv4.1 volumes
There are two areas of focus: light load and upper limit. The following lists de
* Average throughput decreased by 77% * Average latency increased by 1.6 ms
+## Performance considerations with `nconnect`
+
## Next steps
* [Configure NFSv4.1 Kerberos encryption for Azure NetApp Files](configure-kerberos-encryption.md)
azure-percept Azure Percept On Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/hci/azure-percept-on-azure-stack-hci-overview.md
Previously updated : 08/15/2022 Last updated : 08/22/2022 # Azure Percept on Azure Stack HCI overview
The Percept VM leverages Azure IoT Edge to communicate with [Azure IoT Hub](http
Whether you're a beginner, an expert, or anywhere in between, from zero to low code, to creating or bringing your own models, Azure Percept has a solution development path for you to build your Edge artificial intelligence (AI) solution. Azure Percept has three solution development paths that you can use to build Edge AI solutions: Azure Percept Studio, Azure Percept for DeepStream, and Azure Percept Open-Source Project. You aren't limited to one path; you can choose any or all of them depending on your business needs. For more information about the solution development paths, visit [Azure Percept solution development paths overview](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EU92ZnNynDBGuVn3P5Xr5gcBFKS5HQguZm7O5sEENPUvPA?e=33T6Vi). #### *Azure Percept Studio*
-[Azure Percept Studio](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EeyEj0dBcplEs9LSFaz95DsBApnmxRMdjZ9I3QinSgO0yA?e=cbIJkI) is a user-friendly portal for creating, deploying, and operating Edge artificial intelligence (AI) solutions. Using a low-code to no-code approach, you can discover and complete guided workflows and create an end-to-end Edge AI solution. This solution integrates Azure IoT and Azure AI cloud services like Azure IoT Hub, IoT Edge, Azure Storage, Log Analytics, and Spatial Analysis from Azure Cognitive Services.
+[Azure Percept Studio](/azure/azure-percept/studio/azure-percept-studio-overview) is a user-friendly portal for creating, deploying, and operating Edge artificial intelligence (AI) solutions. Using a low-code to no-code approach, you can discover and complete guided workflows and create an end-to-end Edge AI solution. This solution integrates Azure IoT and Azure AI cloud services like Azure IoT Hub, IoT Edge, Azure Storage, Log Analytics, and Spatial Analysis from Azure Cognitive Services.
#### *Azure Percept for DeepStream*
-[Azure Percept for DeepStream](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/ETDSdi6ruptBkwMqvLPRL90Bzv3ORhpmAZ1YLeGt1LvtVA?e=lY2Q4f&CID=DDDB383F-4BFE-4C97-86A7-70766B16EB93&wdLOR=cDA23C19C-5685-46EC-BA28-7C9DEC460A5B&isSPOFile=1&clickparams=eyJBcHBOYW1lIjoiVGVhbXMtRGVza3RvcCIsIkFwcFZlcnNpb24iOiIyNy8yMjA3MzEwMTAwNSIsIkhhc0ZlZGVyYXRlZFVzZXIiOmZhbHNlfQ%3D%3D) includes developer tools that provide a custom developer experience. It enables you to create NVIDIA DeepStream containers using Microsoft-based images and guidance, supported models from NVIDIA out of the box, and/or bring your own models (BYOM). DeepStream is NVIDIAΓÇÖs toolkit to develop and deploy Vision AI applications and services. It provides multi-platform, scalable, Transport Layer Security (TLS)-encrypted security that can be deployed on-premises, on the edge, and in the cloud.
+[Azure Percept for DeepStream](/azure/azure-percept/deepstream/azure-percept-for-deepstream-overview) includes developer tools that provide a custom developer experience. It enables you to create NVIDIA DeepStream containers using Microsoft-based images and guidance, supported models from NVIDIA out of the box, and/or bring your own models (BYOM). DeepStream is NVIDIA's toolkit to develop and deploy Vision AI applications and services. It provides multi-platform, scalable, Transport Layer Security (TLS)-encrypted security that can be deployed on-premises, on the edge, and in the cloud.
#### *Azure Percept Open-Source Project*
-[Azure Percept Open-Source Project](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/Eeoh0pZk5g1MqwJZUAZFEvEBMYmfAqdibII6Znm-PnnDIQ?e=4ZDfUT) is a framework for creating, deploying, and operating Edge artificial intelligence (AI) solutions at scale with the control and flexibility of open-source natively on your environment. Azure Percept Open-Source Project is fully open-sourced and leverages the open-source software (OSS) community to deliver enhanced experiences. It's a self-managed solution where you host the environment in your own cluster.
+[Azure Percept Open-Source Project](/azure/azure-percept/open-source/azure-percept-open-source-project-overview) is a framework for creating, deploying, and operating Edge artificial intelligence (AI) solutions at scale with the control and flexibility of open-source natively on your environment. Azure Percept Open-Source Project is fully open-sourced and leverages the open-source software (OSS) community to deliver enhanced experiences. It's a self-managed solution where you host the environment in your own cluster.
## Next steps
azure-portal How To Manage Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-manage-azure-support-request.md
Title: Manage an Azure support request
description: Learn about viewing support requests and how to send messages, upload files, and manage options. tags: billing Previously updated : 02/07/2022 Last updated : 07/21/2022 # To add: close and reopen, review request status, update contact info # Manage an Azure support request
-After you [create an Azure support request](how-to-create-azure-support-request.md), you can manage it in the [Azure portal](https://portal.azure.com). You can also create and manage requests programmatically by using the [Azure support ticket REST API](/rest/api/support) or [Azure CLI](/cli/azure/azure-cli-support-request). Additionally, you can view your open requests in the [Azure mobile app](https://azure.microsoft.com/get-started/azure-portal/mobile-app/).
+After you [create an Azure support request](how-to-create-azure-support-request.md), you can manage it in the [Azure portal](https://portal.azure.com).
+
+> [!TIP]
+> You can create and manage requests programmatically by using the [Azure support ticket REST API](/rest/api/support) or [Azure CLI](/cli/azure/azure-cli-support-request). Additionally, you can view open requests, reply to your support engineer, or edit the severity of your ticket in the [Azure mobile app](https://azure.microsoft.com/get-started/azure-portal/mobile-app/).
To manage a support request, you must have the [Owner](../../role-based-access-control/built-in-roles.md#owner), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor) role at the subscription level. To manage a support request that was created without a subscription, you must be an [Admin](../../active-directory/roles/permissions-reference.md). ## View support requests
-View the details and status of support requests by going to **Help + support** > **All support requests**.
+View the details and status of support requests by going to **Help + support** > **All support requests** in the Azure portal.
:::image type="content" source="media/how-to-manage-azure-support-request/all-requests-lower.png" alt-text="All support requests":::
You can use the file upload option to upload diagnostic files or any other files
1. On the **All support requests** page, select the support request.
-1. On the **Support Request** page, browse to find your file, then select **Upload**. Repeat the process if you have multiple files.
-
- :::image type="content" source="media/how-to-manage-azure-support-request/file-upload.png" alt-text="Upload file":::
+1. On the **Support Request** page, select the **File upload** box, then browse to find your file and select **Upload**. Repeat the process if you have multiple files.
### File upload guidelines
azure-resource-manager Linter Rule Outputs Should Not Contain Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-outputs-should-not-contain-secrets.md
The following example shows a secure pattern for retrieving a storageAccount key
output storageId string = stg.id ```
-Which can be used in a subsequent deployment as sown in the following example
+The storage account ID can then be used in a subsequent deployment, as shown in the following example:
```bicep someProperty: listKeys(myStorageModule.outputs.storageId.value, '2021-09-01').keys[0].value
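// A hedged extension of the same pattern (the module name 'myStorageModule'
// and the property name below are carried over from the example above as
// assumptions): build the full connection string at the point of use, so the
// account key itself never appears in any deployment output.
someConnectionString: 'DefaultEndpointsProtocol=https;AccountName=${last(split(myStorageModule.outputs.storageId.value, '/'))};AccountKey=${listKeys(myStorageModule.outputs.storageId.value, '2021-09-01').keys[0].value}'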
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/overview.md
Title: Overview of Azure Managed Applications
-description: Describes the concepts for Azure Managed Applications that provide cloud solutions that are easy for consumers to deploy and operate.
+description: Describes the concepts for Azure Managed Applications that provide cloud solutions that are easy for customers to deploy and operate.
Previously updated : 08/03/2022 Last updated : 08/19/2022 # Azure Managed Applications overview
-Azure Managed Applications enable you to offer cloud solutions that are easy for consumers to deploy and operate. You implement the infrastructure and provide ongoing support. To make a managed application available to all customers, publish it in Azure Marketplace. To make it available to only users in your organization, publish it to an internal catalog.
+Azure Managed Applications enable you to offer cloud solutions that are easy for customers to deploy and operate. You implement the infrastructure and provide ongoing support. To make a managed application available to all customers, publish it in Azure Marketplace. To make it available to only users in your organization, publish it to an internal catalog.
-A managed application is similar to a solution template in Azure Marketplace, with one key difference. In a managed application, the resources are deployed to a resource group that's managed by the publisher of the app. The resource group is present in the consumer's subscription, but an identity in the publisher's tenant has access to the resource group. As the publisher, you specify the cost for ongoing support of the solution.
+A managed application is similar to a solution template in Azure Marketplace, with one key difference. In a managed application, the resources are deployed to a resource group that's managed by the publisher of the app. The resource group is present in the customer's subscription, but an identity in the publisher's tenant has access to the resource group. As the publisher, you specify the cost for ongoing support of the solution.
> [!NOTE] > The documentation for Azure Custom Providers used to be included with Managed Applications. That documentation was moved to [Azure Custom Providers](../custom-providers/overview.md). ## Advantages of managed applications
-Managed applications reduce barriers to consumers using your solutions. They don't need expertise in cloud infrastructure to use your solution. Consumers have limited access to the critical resources and don't need to worry about making a mistake when managing it.
+Managed applications reduce barriers to customers using your solutions. They don't need expertise in cloud infrastructure to use your solution. Customers have limited access to the critical resources and don't need to worry about making a mistake when managing it.
-Managed applications enable you to establish an ongoing relationship with your consumers. You define terms for managing the application and all charges are handled through Azure billing.
+Managed applications enable you to establish an ongoing relationship with your customers. You define terms for managing the application and all charges are handled through Azure billing.
Although customers deploy managed applications in their subscriptions, they don't have to maintain, update, or service them. You can make sure that all customers are using approved versions. Customers don't have to develop application-specific domain knowledge to manage these applications. Customers automatically acquire application updates without the need to worry about troubleshooting and diagnosing issues with the applications. For IT teams, managed applications enable you to offer pre-approved solutions to users in the organization. You know these solutions are compliant with organizational standards.
-Managed Applications support [managed identities for Azure resources](./publish-managed-identity.md).
+Managed applications support [managed identities for Azure resources](./publish-managed-identity.md).
## Types of managed applications You can publish your managed application either internally in the service catalog or externally in Azure Marketplace. ### Service catalog
For information about publishing a managed application to Azure Marketplace, see
## Resource groups for managed applications
-Typically, the resources for a managed application are in two resource groups. The consumer manages one resource group, and the publisher manages the other resource group. When the managed application is defined, the publisher specifies the levels of access. The publisher can request either a permanent role assignment, or [just-in-time access](request-just-in-time-access.md) for an assignment that is constrained to a time period.
+Typically, the resources for a managed application are in two resource groups. The customer manages one resource group, and the publisher manages the other resource group. When the managed application is defined, the publisher specifies the levels of access. The publisher can request either a permanent role assignment, or [just-in-time access](request-just-in-time-access.md) for an assignment that's constrained to a time period.
Restricting access for [data operations](../../role-based-access-control/role-definitions.md) is currently not supported for all data providers in Azure.
-The following image shows a scenario where the publisher requests the owner role for the managed resource group. The publisher placed a read-only lock on this resource group for the consumer. The publisher's identities that are granted access to the managed resource group are exempt from the lock.
+The following image shows the relationship between the customer's Azure subscription and the publisher's Azure subscription. The managed application and managed resource group are in the customer's subscription. The publisher has management access to the managed resource group to maintain the managed application's resources. The publisher places a read-only lock on the managed resource group that limits the customer's access to manage resources. The publisher's identities that have access to the managed resource group are exempt from the lock.
### Application resource group
-This resource group holds the managed application instance. This resource group may only contain one resource. The resource type of the managed application is [Microsoft.Solutions/applications](/azure/templates/microsoft.solutions/applications).
+This resource group holds the managed application instance. This resource group may only contain one resource. The resource type of the managed application is [Microsoft.Solutions/applications](#resource-provider).
-The consumer has full access to the resource group and uses it to manage the lifecycle of the managed application.
+The customer has full access to the resource group and uses it to manage the lifecycle of the managed application.
### Managed resource group
-This resource group holds all the resources that are required by the managed application. For example, this resource group contains the virtual machines, storage accounts, and virtual networks for the solution. The consumer has limited access to this resource group because the consumer doesn't manage the individual resources for the managed application. The publisher's access to this resource group corresponds to the role specified in the managed application definition. For example, the publisher might request the Owner or Contributor role for this resource group. The access is either permanent or limited to a specific time.
+This resource group holds all the resources that are required by the managed application. For example, this resource group contains the virtual machines, storage accounts, and virtual networks for the solution. The customer has limited access to this resource group because the customer doesn't manage the individual resources for the managed application. The publisher's access to this resource group corresponds to the role specified in the managed application definition. For example, the publisher might request the Owner or Contributor role for this resource group. The access is either permanent or limited to a specific time.
-When the [managed application is published to the marketplace](../../marketplace/azure-app-offer-setup.md), the publisher can grant consumers the ability to perform specific actions on resources in the managed resource group. For example, the publisher can specify that consumers can restart virtual machines. All other actions beyond read actions are still denied. Changes to resources in a managed resource group by a consumer with granted actions are subject to the [Azure Policy](../../governance/policy/overview.md) assignments within the consumer's tenant scoped to include the managed resource group.
+When the [managed application is published to the marketplace](../../marketplace/azure-app-offer-setup.md), the publisher can grant customers the ability to perform specific actions on resources in the managed resource group. For example, the publisher can specify that customers can restart virtual machines. All other actions beyond read actions are still denied. Changes to resources in a managed resource group by a customer with granted actions are subject to the [Azure Policy](../../governance/policy/overview.md) assignments within the customer's tenant scoped to include the managed resource group.
-When the consumer deletes the managed application, the managed resource group is also deleted.
+When the customer deletes the managed application, the managed resource group is also deleted.
+
+## Resource provider
+
+Managed applications use the `Microsoft.Solutions` resource provider with ARM template JSON. For more information, see the following resource types and API versions:
+
+- [Microsoft.Solutions/applicationDefinitions](/azure/templates/microsoft.solutions/applicationdefinitions?pivots=deployment-language-arm-template)
+- [Microsoft.Solutions/applications](/azure/templates/microsoft.solutions/applications?pivots=deployment-language-arm-template)
+- [Microsoft.Solutions/jitRequests](/azure/templates/microsoft.solutions/jitrequests?pivots=deployment-language-arm-template)
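+As a rough, hypothetical illustration only (the name, parameter references, lock level, and API version below are assumptions for the sketch, not values drawn from the linked reference pages), an `applicationDefinitions` resource declared in ARM template JSON has this general shape:
+
+```json
+{
+  "type": "Microsoft.Solutions/applicationDefinitions",
+  "apiVersion": "2019-07-01",
+  "name": "sampleManagedApplication",
+  "location": "[parameters('location')]",
+  "properties": {
+    "lockLevel": "ReadOnly",
+    "displayName": "Sample managed application",
+    "authorizations": [
+      {
+        "principalId": "[parameters('principalId')]",
+        "roleDefinitionId": "[parameters('roleDefinitionId')]"
+      }
+    ],
+    "packageFileUri": "[parameters('packageFileUri')]"
+  }
+}
+```
+
+See the reference pages above for the full property list and the current API versions.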
## Azure Policy
You can apply an [Azure Policy](../../governance/policy/overview.md) to audit yo
In this article, you learned about benefits of using managed applications. Go to the next article to create a managed application definition. > [!div class="nextstepaction"]
-> [Quickstart: Create and publish a managed application definition](publish-service-catalog-app.md)
+> [Quickstart: Create and publish an Azure managed application definition](publish-service-catalog-app.md)
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Previously updated : 08/16/2022 Last updated : 08/22/2022 # Quickstart: Create and publish an Azure Managed Application definition
Add the following JSON and save the file.
} ```
-For more information about the ARM template's properties, see [Microsoft.Solutions](/azure/templates/microsoft.solutions/applicationdefinitions).
+For more information about the ARM template's properties, see [Microsoft.Solutions/applicationDefinitions](/azure/templates/microsoft.solutions/applicationdefinitions?pivots=deployment-language-arm-template). Managed applications only use ARM template JSON.
### Deploy the definition
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Applying locks can lead to unexpected results. Some operations, which don't seem
- A read-only lock on a **Log Analytics workspace** prevents **User and Entity Behavior Analytics (UEBA)** from being enabled.
+- A delete-only lock on a **Log Analytics workspace** doesn't prevent [data purge operations](../../azure-monitor/logs/personal-data-mgmt.md#delete). To prevent purges, remove the [Data Purger](../../role-based-access-control/built-in-roles.md#data-purger) role from the user instead.
+ - A read-only lock on a **subscription** prevents **Azure Advisor** from working correctly. Advisor is unable to store the results of its queries. - A read-only lock on an **Application Gateway** prevents you from getting the backend health of the application gateway. That [operation uses a POST method](/rest/api/application-gateway/application-gateways/backend-health), which a read-only lock blocks.
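The Data Purger guidance above can be sketched with Azure CLI (the assignee and scope below are placeholders, not values from the article; substitute your own):

```azurecli
# Remove the Data Purger role assignment instead of relying on a delete lock.
az role assignment delete \
  --assignee "user@contoso.com" \
  --role "Data Purger" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```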
In the request, include a JSON object that specifies the lock properties.
- To learn about logically organizing your resources, see [Using tags to organize your resources](tag-resources.md). - You can apply restrictions and conventions across your subscription with customized policies. For more information, see [What is Azure Policy?](../../governance/policy/overview.md).-- For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see [Azure enterprise scaffold - prescriptive subscription governance](/azure/architecture/cloud-adoption-guide/subscription-governance).
+- For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see [Azure enterprise scaffold - prescriptive subscription governance](/azure/architecture/cloud-adoption-guide/subscription-governance).
azure-resource-manager Template Tutorial Add Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-tags.md
Title: Tutorial - add tags to resources in template description: Add tags to resources that you deploy in your Azure Resource Manager template (ARM template). Tags let you logically organize resources. Previously updated : 03/27/2020 Last updated : 08/22/2022
# Tutorial: Add tags in your ARM template
-In this tutorial, you learn how to add tags to resources in your Azure Resource Manager template (ARM template). [Tags](../management/tag-resources.md) help you logically organize your resources. The tag values show up in cost reports. This tutorial takes **8 minutes** to complete.
+In this tutorial, you learn how to add tags to resources in your Azure Resource Manager template (ARM template). [Tags](../management/tag-resources.md) are metadata elements made up of key-value pairs that help you identify resources and show up in cost reports. This tutorial takes **8 minutes** to complete.
## Prerequisites
-We recommend that you complete the [tutorial about Quickstart templates](template-tutorial-quickstart-template.md), but it's not required.
+We recommend that you complete the [tutorial about Quickstart Templates](template-tutorial-quickstart-template.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have Visual Studio Code with the Resource Manager Tools extension and either Azure PowerShell or Azure Command-Line Interface (CLI). For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
-Your previous template deployed a storage account, App Service plan, and web app.
+Your previous template deployed a storage account, an App Service plan, and a web app.
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/quickstart-template/azuredeploy.json":::
-After deploying these resources, you might need to track costs and find resources that belong to a category. You can add tags to help solve these issues.
+After you deploy these resources, you might need to track costs and find resources that belong to a category. You can add tags to help solve these issues.
## Add tags
-You tag resources to add values that help you identify their use. For example, you can add tags that list the environment and the project. You could add tags that identify a cost center or the team that owns the resource. Add any values that make sense for your organization.
+You tag resources to add values that help you identify their use. You can add tags that list the environment and the project. You can also add them to identify a cost center or the team that owns the resource. Add any values that make sense for your organization.
The following example highlights the changes to the template. Copy the whole file and replace your template with its contents.
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli az deployment group create \
az deployment group create \
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources you're creating. Use the `debug` switch to get more information for debugging.
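As a minimal sketch of the troubleshooting advice in the note (the deployment name, resource group, and template file name are assumptions carried over from earlier tutorials in this series):

```azurecli
az deployment group create \
  --name addtagsdeployment \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --verbose
```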
## Verify deployment
You can verify the deployment by exploring the resource group from the Azure por
If you're moving on to the next tutorial, you don't need to delete the resource group.
-If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group.
+If you're stopping now, you might want to delete the resource group.
-1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name.
+1. From the Azure portal, select **Resource groups** from the left menu.
+2. Type the resource group name in the **Filter for any field...** text field.
3. Check the box next to **myResourceGroup** (or your resource group name) and select it.
4. Select **Delete resource group** from the top menu. ## Next steps
-In this tutorial, you added tags to the resources. In the next tutorial, you'll learn how to use parameter files to simplify passing in values to the template.
+In this tutorial, you add tags to the resources. In the next tutorial, you learn how to use parameter files to simplify passing in values to the template.
> [!div class="nextstepaction"] > [Use parameter file](template-tutorial-use-parameter-file.md)
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
With a trial account, you don't have to set up an Azure subscription. When creat
## Create accounts
-* ARM accounts: [Get started with Azure Video Indexer in Azure portal](create-account-portal.md). **The recommended paid account type is the ARM-based account**.
-
- * Upgrade a trial account to an ARM based account and [**import** your content for free](connect-to-azure.md#import-your-content-from-the-trial-account).
+* ARM accounts: **The recommended paid account type is the ARM-based account**.
+
+ * You can create an Azure Video Indexer **ARM-based** account through one of the following:
+
+ 1. [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
+ 2. [Azure portal](https://portal.azure.com/#home)
+
+ For a detailed description, see [Get started with Azure Video Indexer in Azure portal](create-account-portal.md).
+* Upgrade a trial account to an ARM-based account and [import your content for free](import-content-from-trial.md).
* Classic accounts: [Create classic accounts using API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account). * Connect a classic account to ARM: [Connect an existing classic paid Azure Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md).
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
Title: Create an Azure Video Indexer account connected to Azure
-description: Learn how to create an Azure Video Indexer account connected to Azure.
+ Title: Create a classic Azure Video Indexer account connected to Azure
+description: Learn how to create a classic Azure Video Indexer account connected to Azure.
Last updated 05/03/2022
-# Create an Azure Video Indexer account
+# Create a classic Azure Video Indexer account
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-When creating an Azure Video Indexer account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal). With the paid options, Azure Video Indexer offers two types of accounts: classic accounts(General Availability), and ARM-based accounts(Public Preview). Main difference between the two is account management platform. While classic accounts are built on the API Management, ARM-based accounts management is built on Azure, which enables apply access control to all services with role-based access control (Azure RBAC) natively.
+This topic shows how to create a new classic account connected to Azure using the [Azure Video Indexer website](https://aka.ms/vi-portal-link). You can also create an Azure Video Indexer classic account through our [API](https://aka.ms/avam-dev-portal).
-> [!NOTE]
-> Before creating a new account, review [Account types](accounts-overview.md).
-
-* You can create an Azure Video Indexer **classic** account through our [API](https://aka.ms/avam-dev-portal).
-* You can create an Azure Video Indexer **ARM-based** account through one of the following:
-
- 1. [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
- 2. [Azure portal](https://portal.azure.com/#home)
-
-To read more on how to create a **new ARM-Based** Azure Video Indexer account, read this [article](create-video-analyzer-for-media-account.md)
+The topic discusses the prerequisites that you need to connect to your Azure subscription and explains how to configure an Azure Media Services account.
-For more details, see [pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
+A few Azure Video Indexer account types are available to you. For a detailed explanation, review [Account types](accounts-overview.md).
-## How to create classic accounts
-
-This article shows how to create an Azure Video Indexer classic account. The topic provides steps for connecting to Azure using the automatic (default) flow. It also shows how to connect to Azure manually (advanced).
-
-If you are moving from a *trial* to *paid ARM-Based* Azure Video Indexer account, you can choose to copy all of the videos and model customization to the new account, as discussed in the [Import your content from the trial account](#import-your-content-from-the-trial-account) section.
-
-The article also covers [Linking an Azure Video Indexer account to Azure Government](#azure-video-indexer-in-azure-government).
+For pricing details, see [pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
## Prerequisites for connecting to Azure
The article also covers [Linking an Azure Video Indexer account to Azure Governm
This user should be an Azure AD user with a work or school account. Don't use a personal account, such as outlook.com, live.com, or hotmail.com. :::image type="content" alt-text="Screenshot that shows how to choose a user in your Azure A D domain." source="./media/create-account/all-aad-users.png":::-
-### Additional prerequisites for automatic flow
- * A user and member in your Azure AD domain. You'll use this member when connecting your Azure Video Indexer account to Azure.
The article also covers [Linking an Azure Video Indexer account to Azure Governm
This user should be a member in your Azure subscription with either an **Owner** role, or both **Contributor** and **User Access Administrator** roles. A user can be added twice, with two roles: once with Contributor and once with User Access Administrator. For more information, see [View the access a user has to Azure resources](../role-based-access-control/check-access.md). :::image type="content" alt-text="Screenshot that shows the access control settings." source="./media/create-account/access-control-iam.png":::-
-### Additional prerequisites for manual flow
- * Register the Event Grid resource provider using the Azure portal. In the [Azure portal](https://portal.azure.com/), go to **Subscriptions**->[subscription]->**ResourceProviders**.
- Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the "Registered" state, click **Register**. It takes a couple of minutes to register.
+ Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the "Registered" state, select **Register**. It takes a couple of minutes to register.
:::image type="content" alt-text="Screenshot that shows how to select an Event Grid subscription." source="./media/create-account/event-grid.png":::
-## Connect to Azure manually (advanced option)
-
-If the connection to Azure failed, you can attempt to troubleshoot the problem by connecting manually.
+## Connect to Azure
> [!NOTE]
-> It's mandatory to have the following three accounts in the same region: the Azure Video Indexer account that you're connecting with the Media Services account, as well as the Azure storage account connected to the same Media Services account. When you create an Azure Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account.
+> Use the same Azure AD user you used when connecting to Azure.
+
+It's mandatory to have the following three accounts located in the same region:
+
+* The Azure Video Indexer account that you're creating.
+* The Azure Video Indexer account that you're connecting with the Media Services account.
+* The Azure storage account connected to the same Media Services account.
+
+ When you create an Azure Video Indexer account and connect it to Media Services, the media and metadata files are stored in the Azure storage account associated with that Media Services account.
+
+If your storage account is behind a firewall, see [storage account that is behind a firewall](faq.yml#can-a-storage-account-connected-to-the-media-services-account-be-behind-a-firewall).
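The same-region requirement above can be expressed as a simple pre-flight check. This is an illustrative sketch, not part of any Azure SDK; the function name and region strings are hypothetical:

```python
# Hypothetical check for the same-region requirement described above:
# the Azure Video Indexer account, the Media Services account it connects
# to, and that account's storage account must all be in one region.
def same_region(video_indexer_region: str,
                media_services_region: str,
                storage_region: str) -> bool:
    """Return True only when all three accounts share a region."""
    return video_indexer_region == media_services_region == storage_region
```

For example, `same_region("eastus", "eastus", "westus")` fails the requirement, while three matching regions pass.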
### Create and configure a Media Services account
If the connection to Azure failed, you can attempt to troubleshoot the problem b
:::image type="content" alt-text="Screenshot that shows how to specify a storage account." source="./media/create-account/create-new-ams-account.png"::: > [!NOTE]
- > Make sure to write down the Media Services resource and account names. You'll need them for the steps in the next section.
-
+ > Make sure to write down the Media Services resource and account names.
1. Before you can play your videos in the Azure Video Indexer web app, you must start the default **Streaming Endpoint** of the new Media Services account. In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and press start. :::image type="content" alt-text="Screenshot that shows how to specify streaming endpoints." source="./media/create-account/create-ams-account-se.png":::
-4. For Azure Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/azure/media-services/previous/media-services-portal-get-started-with-aad):
+1. For Azure Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/azure/media-services/previous/media-services-portal-get-started-with-aad):
1. In the new Media Services account, select **API access**. 2. Select [Service principal authentication method](/azure/media-services/previous/media-services-portal-get-started-with-aad).
If the connection to Azure failed, you can attempt to troubleshoot the problem b
> [!NOTE] > Make sure to write down the key value and the Application ID. You'll need it for the steps in the next section.
-### Connect manually
-
-In the **Create a new account on an Azure subscription** dialog of your [Azure Video Indexer](https://www.videoindexer.ai/) page, select the **Switch to manual configuration** link.
-
-In the dialog, provide the following information:
-
-|Setting|Description|
-|||
-|Azure Video Indexer account region|The name of the Azure Video Indexer account region. For better performance and lower costs, it's highly recommended to specify the name of the region where the Azure Media Services resource and Azure Storage account are located. |
-|Azure AD tenant|The name of the Azure AD tenant, for example "contoso.onmicrosoft.com". The tenant information can be retrieved from the Azure portal. Place your cursor over the name of the signed-in user in the top-right corner. Find the name to the right of **Domain**.|
-|Subscription ID|The Azure subscription under which this connection should be created. The subscription ID can be retrieved from the Azure portal. Select **All services** in the left panel, and search for "subscriptions". Select **Subscriptions** and choose the desired ID from the list of your subscriptions.|
-|Azure Media Services resource group name|The name for the resource group in which you created the Media Services account.|
-|Media service resource name|The name of the Azure Media Services account that you created in the previous section.|
-|Application ID|The Azure AD application ID (with permissions for the specified Media Services account) that you created in the previous section.|
-|Application key|The Azure AD application key that you created in the previous section. |
-
-### Import your content from the *trial* account
-
-When creating a new **ARM-Based** account, you have an option to import your content from the *trial* account into the new **ARM-Based** account free of charge.
-> [!NOTE]
-> * Import from trial can be performed only once per trial account.
-> * The target ARM-Based account needs to be created and available before import is assigned.
-> * Target ARM-Based account has to be an empty account (never indexed any media files).
-
-To import your data, follow the steps:
- 1. Go to [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
- 2. Select your trial account and go to the *account settings* page
- 3. Click the *Import content to an ARM-based account*
- 4. From the dropdown menu choose the ARM-based account you wish to import the data to.
- * If the account ID isn't showing, you can copy and paste the account ID from Azure portal or the account list, on the side blade in the Azure Video Indexer Portal.
- 5. Click **Import content**
-
- :::image type="content" alt-text="Screenshot that shows how to import your data." source="./media/create-account/import-to-arm-account.png":::
-
-All media and content model customizations will be copied from the *trial* account into the new ARM-Based account.
--
-> [!NOTE]
->
-> The *trial* account is not availagle on the Azure Government cloud.
-
-## Azure Media Services considerations
+### Azure Media Services considerations
The following Azure Media Services related considerations apply:
The following Azure Media Services related considerations apply:
![Media Services reserved units](./media/create-account/ams-reserved-units.png)
+## Create a classic account
+
+1. On the [Azure Video Indexer website](https://aka.ms/vi-portal-link), select **Create unlimited account** (the paid account).
+2. To create a classic account, select **Switch to manual configuration**.
+
+In the dialog, provide the following information:
+
+|Setting|Description|
+|||
+|Azure Video Indexer account region|The name of the Azure Video Indexer account region. For better performance and lower costs, it's highly recommended to specify the name of the region where the Azure Media Services resource and Azure Storage account are located. |
+|Azure AD tenant|The name of the Azure AD tenant, for example "contoso.onmicrosoft.com". The tenant information can be retrieved from the Azure portal. Place your cursor over the name of the signed-in user in the top-right corner. Find the name to the right of **Domain**.|
+|Subscription ID|The Azure subscription under which this connection should be created. The subscription ID can be retrieved from the Azure portal. Select **All services** in the left panel, and search for "subscriptions". Select **Subscriptions** and choose the desired ID from the list of your subscriptions.|
+|Azure Media Services resource group name|The name for the resource group in which you created the Media Services account.|
+|Media service resource name|The name of the Azure Media Services account that you created in the previous section.|
+|Application ID|The Azure AD application ID (with permissions for the specified Media Services account) that you created in the previous section.|
+|Application key|The Azure AD application key that you created in the previous section. |
+
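Before submitting the manual-configuration dialog, it can help to confirm that every value from the table above is filled in. The following sketch is hypothetical (the field names mirror the table; the helper is not part of any Azure API):

```python
# Hypothetical helper: checks that every manual-configuration field from the
# table above has a non-empty value before the connection is attempted.
REQUIRED_FIELDS = [
    "account_region",        # Azure Video Indexer account region
    "aad_tenant",            # e.g. "contoso.onmicrosoft.com"
    "subscription_id",       # Azure subscription for the connection
    "resource_group",        # Azure Media Services resource group name
    "media_services_name",   # Media service resource name
    "application_id",        # Azure AD application ID
    "application_key",       # Azure AD application key
]

def missing_settings(settings: dict) -> list:
    """Return the names of required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not settings.get(f)]
```

For example, `missing_settings({"subscription_id": "..."})` reports every other field as missing.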
+## Import your content from the trial account
+
+See [Import your content from the trial account](import-content-from-trial.md).
 ## Automate creation of the Azure Video Indexer account Automating the creation of the account is a two-step process:
Automating the creation of the account is a two-step process:
### Prerequisites for connecting to Azure Government -- An Azure subscription in [Azure Government](../azure-government/index.yml).
+- An Azure subscription in [Azure Government](../azure-government/index.yml).
- An Azure AD account in Azure Government.-- All pre-requirements of permissions and resources as described above in [Prerequisites for connecting to Azure](#prerequisites-for-connecting-to-azure). Make sure to check [Additional prerequisites for automatic flow](#additional-prerequisites-for-automatic-flow) and [Additional prerequisites for manual flow](#additional-prerequisites-for-manual-flow).
+- All pre-requirements of permissions and resources as described above in [Prerequisites for connecting to Azure](#prerequisites-for-connecting-to-azure).
### Create new account via the Azure Government portal
To automate the creation of the account is a two steps process:
To create a paid account via the Azure Video Indexer portal: 1. Go to https://videoindexer.ai.azure.us
-1. Log in with your Azure Government Azure AD account.
-1. If you do not have any Azure Video Indexer accounts in Azure Government that you are an owner or a contributor to, you will get an empty experience from which you can start creating your account.
+1. Sign in with your Azure Government Azure AD account.
+1. If you don't have any Azure Video Indexer accounts in Azure Government that you're an owner or a contributor to, you'll get an empty experience from which you can start creating your account.
   The rest of the flow is as described above; the only difference is that the regions to select from will be Government regions in which Azure Video Indexer is available.
- If you already are a contributor or an admin of an existing one or more Azure Video Indexer accounts in Azure Government, you will be taken to that account and from there you can start a following steps for creating an additional account if needed, as described above.
+ If you're already a contributor or an admin of one or more existing Azure Video Indexer accounts in Azure Government, you'll be taken to that account, and from there you can follow the steps for creating an additional account if needed, as described above.
### Create new account via the API on Azure Government
To create a paid account in Azure Government, follow the instructions in [Create
In the public cloud when content is deemed offensive based on a content moderation, the customer can ask for a human to look at that content and potentially revert that decision. * No trial accounts.
-* Bing description - in Gov cloud we will not present a description of celebrities and named entities identified. This is a UI capability only.
+* Bing description - in Gov cloud we won't present a description of celebrities and named entities identified. This is a UI capability only.
## Clean up resources
-After you are done with this tutorial, delete resources that you are not planning to use.
+After you're done with this tutorial, delete resources that you aren't planning to use.
### Delete an Azure Video Indexer account
Select the account -> **Settings** -> **Delete this account**.
The account will be permanently deleted in 90 days.
-## Firewall
-
-See [Storage account that is behind a firewall](faq.yml#can-a-storage-account-connected-to-the-media-services-account-be-behind-a-firewall).
- ## Next steps You can programmatically interact with your trial account and/or with your Azure Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
-You should use the same Azure AD user you used when connecting to Azure.
azure-video-indexer Import Content From Trial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/import-content-from-trial.md
+
+ Title: Import your content from the trial account
+description: Learn how to import your content from the trial account.
+ Last updated : 05/03/2022++++
+# Import your content from the trial account
+
+When creating a new ARM-based account, you have an option to import your content from the trial account into the new ARM-based account free of charge.
+
+> [!NOTE]
+> Make sure to review the following considerations.
+
+## Considerations
+
+Review the following considerations.
+
+* Import from trial can be performed only once per trial account.
+* The target ARM-based account needs to be created and available before import is assigned.
+* The target ARM-based account has to be an empty account (one that has never indexed any media files).
+
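The considerations above amount to three preconditions that must all hold before an import can start. As a hedged illustration (the function and its flags are hypothetical, not an Azure Video Indexer API):

```python
# Hypothetical pre-flight check mirroring the considerations above: import is
# allowed only once per trial account, and only into an existing, never-indexed
# ARM-based account.
def can_import(trial_already_imported: bool,
               target_exists: bool,
               target_video_count: int) -> bool:
    if trial_already_imported:      # import can run only once per trial account
        return False
    if not target_exists:           # target must be created and available first
        return False
    return target_video_count == 0  # target must never have indexed any media
```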
+## Import your data
+
+To import your data, follow the steps:
+
+ 1. Go to the [Azure Video Indexer portal](https://aka.ms/vi-portal-link).
+ 2. Select your trial account and go to the **Account settings** page.
+ 3. Select **Import content to an ARM-based account**.
+ 4. From the dropdown menu, choose the ARM-based account you wish to import the data to.
+
+ * If the account ID isn't showing, you can copy and paste the account ID from the Azure portal or the account list, on the side blade in the Azure Video Indexer portal.
+ 5. Select **Import content**.
+
+ :::image type="content" alt-text="Screenshot that shows how to import your data." source="./media/create-account/import-to-arm-account.png":::
+
+All media and content model customizations will be copied from the trial account into the new ARM-based account.
+
+## Next steps
+
+You can programmatically interact with your trial account and/or with your Azure Video Indexer accounts that are connected to Azure by following the instructions in: [Use APIs](video-indexer-use-apis.md).
+
+You should use the same Azure AD user you used when connecting to Azure.
azure-vmware Enable Hcx Access Over Internet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-hcx-access-over-internet.md
Last updated 7/19/2022
# Enable HCX access over the internet -
-In this article, you'll learn how to perform HCX migration over a Public IP address using Azure VMware Solution.
+In this article, you'll learn how to perform HCX migration over a public IP address using Azure VMware Solution.
>[!IMPORTANT]
->Before configuring a Public IP on your Azure VMware Solution private cloud, please consult your Network Administrator to understand the implications and the impact to your environment.
-
-You'll also learn how to pair HCX sites and create service mesh from on-premises to an Azure VMware Solution private cloud using a Public IP. The service mesh allows you to migrate a workload from an on-premises datacenter to an Azure VMware Solution private cloud over the public internet. This solution is useful when the customer is not using ExpressRoute or VPN connectivity with the Azure cloud.
-
-
-> [!IMPORTANT]
-> The on-premises HCX appliance should be reachable from the internet to establish HCX communication from on-premises to the Azure VMware Solution private cloud.
-
-## Configure Public IP block
-
-To perform HCX Migration over the public internet, you'll need a minimum of six Public IP addresses. Five of these Public IP addresses will be used for the Public IP segment, and one will be used for configuring Network Address Translation (NAT). You can obtain the Public IP block by reserving a /29 from the Azure VMware Solution portal. Configure a Public IP block through portal by using the Public IP feature of the Azure VMware Solution private cloud.
+>Before configuring a public IP on your Azure VMware Solution private cloud, consult your network administrator to understand the implications and the impact to your environment.
-1. Sign in to Azure VMware Solution portal.
-1. Under **Workload Networking**, select **Public IP (preview)**.
-1. Select **+Public IP**.
-1. Enter the **Public IP name** and select the address space from the **Address space** drop-down list according to the number of IPs required, then select **Configure**.
- >[!Note]
- > It will take 15-20 minutes to configure the Public IP block on private cloud.
You'll also learn how to pair HCX sites and create a service mesh from on-premises to an Azure VMware Solution private cloud using a public IP. The service mesh allows you to migrate a workload from an on-premises datacenter to an Azure VMware Solution private cloud over the public internet. This solution is useful when the customer isn't using ExpressRoute or VPN connectivity with the Azure cloud.
-After the Public IP is configured successfully, you should see it appear under the Public IP section. The provisioning state shows **Succeeded**. This Public IP block is configured as NSX-T segment on the Tier-1 router.
+> [!IMPORTANT]
+> The on-premises HCX appliance should be reachable from the internet to establish HCX communication from on-premises to the Azure VMware Solution private cloud.
-For more information about how to enable a public IP to the NSX Edge for Azure VMware Solution, see [Enable Public IP to the NSX Edge for Azure VMware Solution](./enable-public-ip-nsx-edge.md).
+## Configure public IP block
-## Create Public IP segment on NSX-T
-Before you create a Public IP segment, get your credentials for NSX-T Manager from Azure VMware Solution portal.
+For HCX Manager to be available over the public IP address, you'll need one public IP address for the DNAT rule.
-1. Sign in to NSX-T Manager using credentials provided by the Azure VMware Solution portal.
-1. Under the **Manage** section, select **Identity**.
-1. Copy the NSX-T Manager admin user password.
+To perform HCX migration over the public internet, you'll need additional IP addresses. A /29 subnet is the minimum configuration when defining the HCX network profile (the usable IPs in the subnet are assigned to the IX and NE appliances). You can choose a bigger subnet based on your requirements. You'll create an NSX-T segment using this public subnet. This segment can be used for creating the HCX network profile.
-1. Browse the NSX-T Manger and paste the admin password in the password field, and select **Login**.
-1. Under the **Networking** section select **Connectivity** and **Segments**, then select **ADD SEGMENT**.
-1. Provide Segment name, select Tier-1 router as connected gateway, and provide the reserved Public IP under subnets. The Public IP block for this Public IP segment shouldn't include the first and last Public IPs from the overall Public IP block. For example, if you reserved 20.95.1.16/29, you would input 20.95.1.16/30.
-1. Select **Save**. ΓÇ»
+>[!Note]
> After assigning a subnet to the NSX-T segment, you can't use an IP from that subnet to create the DNAT rule. The two subnets must be different.
-## Assign public IP to HCX manager
-HCX manager of destination Azure VMware Solution SDDC should be reachable from the internet to do site pairing with source site. HCX Manager can be exposed by way of DNAT rule and a static null route. Because HCX Manager is in the provider space, not within the NSX-T environment, the null route is necessary to allow HCX Manager to route back to the client by way of the DNAT rule.
Configure a public IP block through the portal by using the [Public IP feature of the Azure VMware Solution](enable-hcx-access-over-internet.MD#enable-hcx-access-over-the-internet) private cloud.
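The address planning above can be sketched with the standard `ipaddress` module. The addresses below are illustrative examples (drawn from the TEST-NET-3 documentation range), not values from this article:

```python
import ipaddress

# Sketch of the address planning described above, assuming a /29 public block
# for the HCX network profile and a separate public IP for the DNAT rule.
profile_block = ipaddress.ip_network("203.0.113.8/29")  # example /29 block
dnat_ip = ipaddress.ip_address("203.0.113.20")          # example DNAT address

# Usable IPs in the /29 (network and broadcast excluded) go to the IX and NE
# appliances when the HCX network profile is defined.
usable = list(profile_block.hosts())
assert len(usable) == 6

# Per the note above, the DNAT IP must not come from the segment's subnet.
assert dnat_ip not in profile_block
```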
-### Add static null route to the T1 router
+## Use public IP address for Cloud HCX Manager public access
+Cloud HCX Manager can be made available over a public IP address by using a DNAT rule. However, since Cloud HCX Manager is in the provider space, the null route is necessary to allow HCX Manager to route back to the client by way of the DNAT rule. It forces the NAT traffic through the NSX-T Tier-0 router.
-The static null route is used to allow HCX private IP to route through the NSX T1 for public endpoints.
+## Add static null route to the Tier-1 router
+The static null route is used to allow the HCX private IP to route through the NSX Tier-1 gateway for public endpoints. You can add this static route to the default Tier-1 router created in your private cloud, or you can create a new Tier-1 router.
1. Sign in to NSX-T manager, and select **Networking**. 1. Under the **Connectivity** section, select **Tier-1 Gateways**.
-1. Edit the existing T1 gateway.
+1. Edit the existing Tier-1 gateway.
1. Expand **STATIC ROUTES**. 1. Select the number next to **Static Routes**. 1. Select **ADD STATIC ROUTE**. A pop-up window is displayed. 1. Under **Name**, enter the name of the route.
-1. Under **network**, enter a non-overlapping /32 IP address under Network.
+1. Under **Network**, enter a non-overlapping /32 IP address.
>[!NOTE]
- > This address should not overlap with any other IP addresses on the network.
+ > This address should not overlap with any other IP addresses on the private cloud network or the customer network.
+
+ :::image type="content" source="media/hcx-over-internet/hcx-sample-static-route.png" alt-text="Diagram showing a sample static route configuration." border="false" lightbox="media/hcx-over-internet/hcx-sample-static-route.png":::
1. Under **Next hops**, select **Set**. 1. Select **NULL** as IP Address.
- Leave defaults for Admin distance and scope.
-1. Select **ADD**, then select **APPLY**.
+ Leave defaults for Admin distance and scope.
+1. Select **ADD**, then select **APPLY**.
1. Select **SAVE**, then select **CLOSE**.
+ :::image type="content" source="media/hcx-over-internet/hcx-sample-null-route.png" alt-text="Diagram showing a sample Null route configuration." border="false" lightbox="media/hcx-over-internet/hcx-sample-null-route.png":::
1. Select **CLOSE EDITING**.
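The non-overlap requirement for the /32 static-route address can be checked with the `ipaddress` module. This sketch and its example networks are hypothetical:

```python
import ipaddress

def is_non_overlapping(candidate: str, existing: list) -> bool:
    """True if the /32 candidate overlaps none of the existing networks."""
    host = ipaddress.ip_network(candidate)
    return not any(host.overlaps(ipaddress.ip_network(n)) for n in existing)

# Example: private-cloud and customer networks already in use (made-up values).
in_use = ["10.14.27.0/24", "192.168.0.0/16"]
ok = is_non_overlapping("172.16.100.5/32", in_use)        # candidate is safe
bad = is_non_overlapping("10.14.27.9/32", in_use)         # falls inside a /24
```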
-### Add NAT rule to T1 gateway
+## Add NAT rule to Tier-1 gateway
->[!Note]
->The NAT rules should use a different Public IP address than your Public IP segment.
1. Sign in to NSX-T Manager, and select **Networking**. 1. Select **NAT**.
-1. Select the T1 Gateway.
+1. Select the Tier-1 gateway. Use the same Tier-1 router to create the NAT rule that you used to create the null route in the previous steps.
1. Select **ADD NAT RULE**.
-1. Add one SNAT rule for HCX Manager.
+1. Add one SNAT rule and one DNAT rule for HCX Manager.
1. The DNAT Rule Destination is the Public IP for HCX Manager. The Translated IP is the HCX Manager IP in the cloud.
- 1. The SNAT Rule Source is the HCX Manager IP in the cloud. The Translated IP is the non-overlapping /32 IP from the Static Route.
-1. Make sure to set the Firewall option on DNAT rule to **Match External Address**.
-1. Create T1 Gateway Firewall rules to allow only expected traffic to the Public IP for HCX Manager and drop everything else.
- 1. Create a Gateway Firewall rule on the T1 that allows your On-Premise as the **Source IP** and the Azure VMware Solution reserved Public as the **Destination IP**. This rule should be the highest priority.
- 1. Create a Gateway Firewall rule on the T1 that denies all other traffic where the **Source IP** is and ΓÇ£AnyΓÇ¥ and **Destination IP** is the Azure VMware Solution reserved Public IP.
+ 1. The SNAT Rule Destination is the HCX Manager IP in the cloud. The Translated IP is the non-overlapping /32 IP from the Static Route.
+ 1. Make sure to set the Firewall option on DNAT rule to **Match External Address**.
+ :::image type="content" source="media/hcx-over-internet/hcx-sample-public-access-route.png" alt-text="Diagram showing a sample NAT rule for public access of HCX Virtual machine." border="false" lightbox="media/hcx-over-internet/hcx-sample-public-access-route.png":::
+
+1. Create Tier-1 Gateway Firewall rules to allow only expected traffic to the Public IP for HCX Manager and drop everything else.
+ 1. Create a Gateway Firewall rule on the T1 that allows your on-premises as the **Source IP** and the Azure VMware Solution reserved Public as the **Destination IP**. This rule should be the highest priority.
+ 1. Create a Gateway Firewall rule on the Tier-1 that denies all other traffic where the **Source IP** is **Any** and **Destination IP** is the Azure VMware Solution reserved Public IP.
+
For more information, see [HCX ports](https://ports.esp.vmware.com/home/VMware-HCX).
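The pair of NAT rules above can be modeled as two small translation functions. This is a toy sketch; the IP addresses and names are illustrative only, not values from your environment:

```python
# Toy model of the two NAT rules above (names/IPs are illustrative only):
# DNAT: traffic to the HCX public IP is translated to the private HCX Manager
# IP; SNAT: traffic from HCX Manager is translated to the non-overlapping /32,
# which the null route forces back through the Tier-0 router.
HCX_PUBLIC_IP = "203.0.113.10"    # example reserved public IP for HCX Manager
HCX_PRIVATE_IP = "10.2.0.4"       # example HCX Manager IP in the cloud
SNAT_TRANSLATED = "172.16.100.5"  # example non-overlapping /32 from the route

def apply_dnat(dst_ip: str) -> str:
    """Translate the destination of inbound traffic to HCX Manager."""
    return HCX_PRIVATE_IP if dst_ip == HCX_PUBLIC_IP else dst_ip

def apply_snat(src_ip: str) -> str:
    """Translate the source of traffic leaving HCX Manager."""
    return SNAT_TRANSLATED if src_ip == HCX_PRIVATE_IP else src_ip
```

Traffic not addressed to (or sourced from) HCX Manager passes through both functions unchanged, mirroring the fact that the rules match only the HCX addresses.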
->[!NOTE]
+> [!NOTE]
> HCX manager can now be accessed over the internet using public IP.
-### Create network profile for HCX at destination site
-1. Sign in to Destination HCX Manager.
-1. Select **Interconnect** and then select the **Network Profiles** tab.
-1. Select **Create Network Profile**.
-1. Select **NSX Networks** as network type under **Network**.
-1. Select the **Public-IP-Segment** created on NSX-T.
-1. Enter **Name**.
-1. Under IP pools, enter the **IP Ranges** for HCX uplink, **Prefix Length**, and **Gateway** of public IP segment.
-1. Scroll down and select the **HCX Uplink** checkbox under **HCX Traffic Type** as this profile will be used for HCX uplink.
-1. To create the Network Profile, select **Create**.
+## Pair sites using HCX Cloud manager's public IP address
-### Pair site
-Site pairing is required to create service mesh between source and destination sites.
+Site pairing is required before you create a service mesh between the source and destination sites.
1. Sign in to the **Source** site HCX Manager.
-1. Select **Site Pairing** and select **ADD SITE PAIRING**.
-1. Enter the remote HCX URL and sign in credentials, then select **Connect**.
+1. Select **Site Pairing** and select **ADD SITE PAIRING**.
+1. Enter the **Cloud HCX Manager Public URL** as the remote site along with the sign-in credentials, then select **Connect**.
After the pairing is done, it will appear under **Site Pairing**.
-### Create service mesh
-Service Mesh will deploy HCX WAN Optimizer, HCX Network Extension and HCX-IX appliances.
+## Create public IP segment on NSX-T
+Before you create a public IP segment, get your credentials for NSX-T Manager from the Azure VMware Solution portal.
+
+1. Under the **Networking** section select **Connectivity**, **Segments**, and then select **ADD SEGMENT**.
+1. Provide a segment name, select **Tier-1 router** as the connected gateway, and provide the reserved public IP under subnets.
+1. Select **Save**.
+
+## Create network profile for HCX at destination site
+1. Sign in to the destination HCX Manager (the cloud manager in this case).
+1. Select **Interconnect** and then select the **Network Profiles** tab.
+1. Select **Create Network Profile**.
+1. Select **NSX Networks** as network type under **Network**.
+1. Select the **Public-IP-Segment** created on NSX-T.
+1. Enter **Name**.
+1. Under IP pools, enter the **IP Ranges** for HCX uplink, **Prefix Length**, and **Gateway** of the public IP segment.
+1. Scroll down and select the **HCX Uplink** checkbox under **HCX Traffic Type** as this profile will be used for HCX uplink.
+1. Select **Create** to create the network profile.
+
+## Create service mesh
+Service Mesh will deploy the HCX WAN Optimizer, HCX Network Extension, and HCX-IX appliances.
1. Sign in to **Source** site HCX Manager. 1. Select **Interconnect** and then select the **Service Mesh** tab.
-1. Select **CREATE SERVICE MESH**.
-1. Select the **destination** site to create service mesh with and select **Continue**.
+1. Select **CREATE SERVICE MESH**.
+1. Select the **destination** site to create service mesh with and then select **Continue**.
1. Select the compute profiles for both sites and select **Continue**.
-1. Select the HCX services to be activated and select **Continue**.
+1. Select the HCX services to be activated and select **Continue**.
>[!Note]
- >Premium services require an additional HCX Enterprise license.
-1. Select the Network Profile of source site.
-1. Select the Network Profile of Destination that you created in the Network Profile section.
+ >Premium services require an additional HCX Enterprise license.
+1. Select the network profile of source site.
+1. Select the network profile of destination that you created in the **Network Profile** section.
1. Select **Continue**.
-1. Review the Transport Zone information, and then select **Continue**.
-1. Review the Topological view, and select **Continue**.
-1. Enter the Service Mesh name and select **FINISH**.
-
-### Extend network
-The HCX Network Extension service provides layer 2 connectivity between sites. The extension service also allows you to keep the same IP and MAC addresses during virtual machine migrations.
-1. Sign in to **source** HCX Manager.
-1. Under the **Network Extension** section, select the site for which you want to extend the network, and then select **EXTEND NETWORKS**.
-1. Select the network that you want to extend to destination site, and select **Next**.
+1. Review the **Transport Zone** information, and then select **Continue**.
+1. Review the **Topological view**, and select **Continue**.
+1. Enter the **Service Mesh name** and select **FINISH**.
+1. Add the public IP addresses to the firewall, allowing only the required ports.
+
+## Extend network
+The HCX Network Extension service provides layer 2 connectivity between sites. The extension service also allows you to keep the same IP and MAC addresses during virtual machine migrations.
+1. Sign in to **source** HCX Manager.
+1. Under the **Network Extension** section, select the site for which you want to extend the network, and then select **EXTEND NETWORKS**.
+1. Select the network that you want to extend to the destination site, and select **Next**.
1. Enter the subnet details of the network that you're extending.
-1. Select the destination first hop route (T1), and select **Submit**.
-1. Sign in to the **destination** NSX, you'll see Network 10.14.27.1/24 has been extended.
+1. Select the destination first hop route (Tier-1), and select **Submit**.
+1. Sign in to the **destination** NSX. You'll see that network 10.14.27.1/24 has been extended.
-After the network is extended to destination site, VMs can be migrated over Layer 2 Extension.
+After the network is extended to the destination site, VMs can be migrated over the Layer 2 extension.
-## Next steps
+## Next steps
[Enable Public IP to the NSX Edge for Azure VMware Solution](./enable-public-ip-nsx-edge.md)

For detailed information on HCX network underlay minimum requirements, see [Network Underlay Minimum Requirements](https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-guide/GUID-8128EB85-4E3F-4E0C-A32C-4F9B15DACC6D.html).
cognitive-services Get Analytics Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/get-analytics-knowledge-base.md
QnA Maker stores all chat logs and other telemetry, if you have enabled Applicat
| join kind= inner ( traces | extend id = operation_ParentId ) on id
+ | where message == "QnAMaker GenerateAnswer"
| extend question = tostring(customDimensions['Question']) | extend answer = tostring(customDimensions['Answer']) | extend score = tostring(customDimensions['Score'])
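The query's `where` and `extend` steps filter traces on the message text and then project fields out of each trace's `customDimensions` bag. Conceptually, on a single made-up trace record (a Python sketch for illustration only, not the Application Insights service):

```python
# A made-up Application Insights trace record, shaped like the query expects.
trace = {
    "message": "QnAMaker GenerateAnswer",
    "customDimensions": {"Question": "hours?", "Answer": "9-5", "Score": "87"},
}

# Mirror of the Kusto pipeline: filter on message, then extend the projected columns.
row = None
if trace["message"] == "QnAMaker GenerateAnswer":
    row = {
        "question": str(trace["customDimensions"]["Question"]),
        "answer": str(trace["customDimensions"]["Answer"]),
        "score": str(trace["customDimensions"]["Score"]),
    }

print(row["question"])  # -> hours?
```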
cognitive-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-azure-ad-auth.md
$resourceId = resource.Id
*** ## Create the Speech SDK configuration object+ With an Azure AD access token, you can now create a Speech SDK configuration object. The method of providing the token, and the method to construct the corresponding Speech SDK ```Config``` object varies by the object you'll be using.
-### SpeechRecognizer, IntentRecognizer, ConversationTranscriber, and SpeechSynthesizer
+### SpeechRecognizer, SpeechSynthesizer, IntentRecognizer, ConversationTranscriber, and SourceLanguageRecognizer
-For ```SpeechRecognizer```, ```IntentRecognizer```, ```ConversationTranscriber```, and ```SpeechSynthesizer``` objects, build the authorization token from the resource ID and the Azure AD access token and then use it to create a ```SpeechConfig``` object.
+For ```SpeechRecognizer```, ```SpeechSynthesizer```, ```IntentRecognizer```, ```ConversationTranscriber```, and ```SourceLanguageRecognizer``` objects, build the authorization token from the resource ID and the Azure AD access token and then use it to create a ```SpeechConfig``` object.
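As a language-neutral sketch of the step above: the authorization token is conventionally the resource ID and the Azure AD access token joined with an `aad#` prefix (format assumed from the C# sample in this article; verify against the SDK version you use, and note the token values below are hypothetical):

```python
def build_authorization_token(resource_id: str, aad_token: str) -> str:
    """Combine the Speech resource ID and Azure AD access token into the
    authorization token string passed to the Speech SDK config object."""
    return f"aad#{resource_id}#{aad_token}"

token = build_authorization_token(
    "/subscriptions/.../resourceGroups/rg/providers/Microsoft.CognitiveServices/accounts/my-speech",
    "eyJ0eXAiOiJKV1Qi...",  # hypothetical Azure AD access token
)
print(token.split("#")[0])  # -> aad
```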
::: zone pivot="programming-language-csharp" ```C#
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Unresolved errors listed in the next table affect the quality of training, but d
| Script | Non-normalized text|This script contains symbols. Normalize the symbols to match the audio. For example, normalize *50%* to *fifty percent*.|
| Script | Not enough question utterances| At least 10 percent of the total utterances should be question sentences. This helps the voice model properly express a questioning tone.|
| Script | Not enough exclamation utterances| At least 10 percent of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone.|
+| Script | No valid end punctuation| Add one of the following at the end of the line: full stop (half-width '.' or full-width '。'), exclamation point (half-width '!' or full-width '!'), or question mark (half-width '?' or full-width '?').|
| Audio| Low sampling rate for neural voice | It's recommended that the sampling rate of your .wav files should be 24 KHz or higher for creating neural voices. If it's lower, it will be automatically raised to 24 KHz.|
| Volume |Overall volume too low|Volume shouldn't be lower than -18 dB (10 percent of max volume). Control the volume average level within proper range during the sample recording or data preparation.|
| Volume | Volume overflow| Overflowing volume is detected at {}s. Adjust the recording equipment to avoid the volume overflow at its peak value.|
For more information, [learn more about the capabilities and limits of this feat
- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
- [How to record voice samples](record-custom-voice-samples.md)
- [Text-to-Speech API reference](rest-text-to-speech.md)-- [Long Audio API](long-audio-api.md)
+- [Long Audio API](long-audio-api.md)
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
# Support for Teams identity in Calling SDK

The Azure Communication Services Calling SDK for JavaScript enables Teams user devices to drive voice and video communication experiences. This page provides detailed descriptions of Calling features, including platform and browser support information. To get started right away, check out [Calling quickstarts](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). Key features of the Calling SDK:
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/join-teams-meeting.md
# Join a Teams meeting
-> [!IMPORTANT]
-> BYOI interoperability is now generally available to all Communication Services applications and Teams organizations.
- Azure Communication Services can be used to build applications that enable users to join and participate in Teams meetings. [Standard Azure Communication Services pricing](https://azure.microsoft.com/pricing/details/communication-services/) applies to these users, but there's no additional fee for the interoperability capability itself. With the bring your own identity (BYOI) model, you control user authentication and users of your applications don't need Teams licenses to join Teams meetings. This is ideal for applications that enable licensed Teams users and external users using a custom application to join into a virtual consultation experience. For example, healthcare providers using Teams can conduct telehealth virtual visits with their patients who use a custom application. It's also possible to use Teams identities with the Azure Communication Services SDKs. More information is available [here](./teams-interop.md).
Microsoft will indicate to you via the Azure Communication Services API that rec
- [How-to: Join a Teams meeting](../how-tos/calling-sdk/teams-interoperability.md)
- [Quickstart: Join a BYOI calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Quickstart: Join a BYOI chat app to a Teams meeting](../quickstarts/chat/meeting-interop.md)
+- [Quickstart: Join a BYOI chat app to a Teams meeting](../quickstarts/chat/meeting-interop.md)
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
If you require sending an amount of messages that exceeds the rate-limits, pleas
For more information on the SMS SDK and service, see the [SMS SDK overview](./sms/sdk-features.md) page or the [SMS FAQ](./sms/sms-faq.md) page.
+## Email
+Sending a high volume of messages is subject to limits on the number of email messages that you can send. If you hit these limits, your messages won't be queued for sending. You can submit these requests again once the Retry-After time expires.
+
+### Rate Limits
+
+|Operation|Scope|Timeframe (minutes)| Limit (number of emails) |
+||--|-|-|
+|Send Email|Per Subscription|1|10|
+|Send Email|Per Subscription|60|25|
+|Get Email Status|Per Subscription|1|20|
+|Get Email Status|Per Subscription|60|50|
+
+### Size Limits
+
+| **Name** | Limit |
+|--|--|
+|Number of recipients in Email|50 |
+|Attachment size - per message|10 MB |
+
+### Action to take
+This sandbox setup helps developers start building their application; you can request an increase in sending volume as the application gets ready to go live. If you need to send a number of messages that exceeds the rate limits, submit a support request to raise your sending limit.
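When a send is rejected for exceeding these limits, the service indicates a Retry-After interval. A minimal client-side sketch of waiting out that interval before resubmitting (a hypothetical helper, not part of the Communication Services SDK; `send_fn` stands in for your email transport):

```python
import time

def send_with_retry(send_fn, max_attempts=3):
    """Call send_fn until it succeeds, sleeping for the Retry-After
    interval (in seconds) whenever the rate limit is hit.

    send_fn returns a tuple: (ok, retry_after_seconds).
    """
    for attempt in range(max_attempts):
        ok, retry_after = send_fn()
        if ok:
            return True
        if attempt < max_attempts - 1:
            time.sleep(retry_after)  # honor the Retry-After interval
    return False

# Hypothetical transport: rate limited once (Retry-After of 0 for the demo), then succeeds.
responses = iter([(False, 0), (True, 0)])
print(send_with_retry(lambda: next(responses)))  # -> True
```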
+ ## Chat ### Size Limits
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-endpoint.md
# Communication as Teams user

You can use Azure Communication Services and Graph API to integrate communication as Teams users into your products. Teams users can communicate with other people in and outside their organization. The benefits for enterprises are:

- No requirement to download Teams desktop, mobile or web clients for Teams users
- Teams users don't lose context by switching between applications for day-to-day work and Teams client for communication
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md
# Teams interoperability
-> [!IMPORTANT]
-> Teams external users interoperability for Teams meetings is now generally available to all Communication Services applications and Teams organizations.
->
-> Support for Teams users in Azure Communication Services SDK is in public preview and available to Web-based applications.
->
-> Preview APIs and SDKs are provided without a service-level agreement and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Azure Communication Services can be used to build custom applications and experiences that enable interaction with Microsoft Teams users over voice, video, chat, and screen sharing. The [Communication Services UI Library](ui-library/ui-library-overview.md) provides customizable, production-ready UI components that can be easily added to these applications. The following video demonstrates some of the capabilities of Teams interoperability: <br>
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
# Quickstart: Set up and manage access tokens for Teams users

In this quickstart, you'll build a .NET console application to authenticate a Microsoft 365 user by using the Microsoft Authentication Library (MSAL) and retrieving a Microsoft Azure Active Directory (Azure AD) user token. You'll then exchange that token for an access token for a Teams user with the Azure Communication Services Identity SDK. The access token for the Teams user can then be used by the Communication Services Calling SDK to integrate calling capability as a Teams user.

> [!NOTE]
communication-services Get Started With Voice Video Calling Custom Teams Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md
# QuickStart: Add 1:1 video calling as a Teams user to your application

[!INCLUDE [Video calling with JavaScript](./includes/custom-teams-endpoint/voice-video-calling-cte-javascript.md)]

## Clean up resources
confidential-computing Application Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/application-development.md
Title: Azure confidential computing development tools description: Use tools and libraries to develop applications for confidential computing on Intel SGX -+ Last updated 11/01/2021-+
confidential-computing Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/attestation.md
Title: Attestation for SGX enclaves description: You can use attestation to verify that your Azure confidential computing SGX enclave is secure. -+ Last updated 12/20/2021-+
confidential-computing Confidential Computing Deployment Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-computing-deployment-models.md
Title: Choose Between Deployment Models description: Choose Between Deployment Models-+ Last updated 11/04/2021-+
confidential-computing Confidential Computing Enclaves https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-computing-enclaves.md
Title: Build with SGX enclaves - Azure Virtual Machines description: Learn about Intel SGX hardware to enable your confidential computing workloads.-+ Last updated 11/01/2021-+
confidential-computing Confidential Computing Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-computing-solutions.md
Title: Building Azure confidential solutions description: Learn how to build solutions on Azure confidential computing-+ Last updated 11/01/2021-+
confidential-computing Enclave Development Oss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/enclave-development-oss.md
Title: Develop application enclaves with open-source solutions in Azure Confidential Computing description: Learn how to use tools to develop Intel SGX applications for Azure confidential computing.-+ Last updated 11/01/2021-+
confidential-computing How To Fortanix Confidential Computing Manager Node Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-fortanix-confidential-computing-manager-node-agent.md
Title: Run an app with Fortanix Confidential Computing Manager description: Learn how to use Fortanix Confidential Computing Manager to convert your containerized images. -+ Last updated 03/24/2021-+ # Run an application by using Fortanix Confidential Computing Manager
confidential-computing How To Fortanix Confidential Computing Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-fortanix-confidential-computing-manager.md
Title: Fortanix Confidential Computing Manager in an Azure managed application description: Learn how to deploy Fortanix Confidential Computing Manager (CCM) in a managed application in the Azure portal.-+ Last updated 02/03/2021-+
confidential-computing Overview Azure Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview-azure-products.md
Title: Azure confidential computing products description: Learn about all the confidential computing services that Azure provides-+ Last updated 11/04/2021-+
confidential-computing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview.md
Title: Azure Confidential Computing Overview description: Overview of Azure Confidential (ACC) Computing -+ Last updated 11/01/2021-+
confidential-computing Quick Create Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-marketplace.md
Title: Quickstart - Create Intel SGX VM in the Azure Marketplace description: Get started with your deployments by learning how to quickly create an Intel SGX VM with Marketplace.-+ Last updated 11/01/2021-+
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-portal.md
Title: Quickstart - Create Intel SGX VM in the Azure Portal description: Get started with your deployments by learning how to quickly create an Intel SGX VM in the Azure Portal-+ Last updated 11/1/2021-+
confidential-computing Use Cases Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/use-cases-scenarios.md
Title: Common Azure confidential computing scenarios and use cases description: Understand how to use confidential computing in your scenario. -+ Last updated 11/04/2021-+ # Use cases and scenarios
confidential-computing Virtual Machine Solutions Sgx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-sgx.md
Title: Deploy Intel SGX virtual machines description: Learn about using Intel SGX virtual machines (VMs) in Azure confidential computing.-+ Last updated 12/20/2021-+
connectors Connectors Create Api Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sendgrid.md
- Title: Connect to SendGrid from Azure Logic Apps
-description: Automate tasks and workflows that send emails and manage mailing lists in SendGrid using Azure Logic Apps.
--- Previously updated : 08/24/2018
-tags: connectors
--
-# Connect to SendGrid from Azure Logic Apps
-
-With Azure Logic Apps and the SendGrid connector,
-you can create automated tasks and workflows that
-send emails and manage your recipient lists,
-for example:
-
-* Send email.
-* Add recipients to lists.
-* Get, add, and manage global suppression.
-
-You can use SendGrid actions in your logic apps to perform these tasks.
-You can also have other actions use the output from SendGrid actions.
-
-This connector provides only actions, so to start your logic app,
-use a separate trigger, such as a **Recurrence** trigger.
-For example, if you regularly add recipients to your lists,
-you can send email about recipients and lists using the
-Office 365 Outlook connector or Outlook.com connector.
-If you're new to logic apps, review
-[What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
-
-## Prerequisites
-
-* An Azure account and subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* A [SendGrid account](https://www.sendgrid.com/)
-and a [SendGrid API key](https://sendgrid.com/docs/ui/account-and-settings/api-keys/)
-
- Your API key authorizes your logic app to create
- a connection and access your SendGrid account.
-
-* Basic knowledge about
-[how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-
-* The logic app where you want to access your SendGrid account.
-To use a SendGrid action, start your logic app with another trigger,
-for example, the **Recurrence** trigger.
-
-## Connect to SendGrid
--
-1. Sign in to the [Azure portal](https://portal.azure.com),
-and open your logic app in Logic App Designer, if not open already.
-
-1. Choose a path:
-
- * Under the last step where you want to add an action,
- choose **New step**.
-
- -or-
-
- * Between the steps where you want to add an action,
- move your pointer over the arrow between steps.
- Choose the plus sign (**+**) that appears,
- and then select **Add an action**.
-
-1. In the search box, enter "sendgrid" as your filter.
-Under the actions list, select the action you want.
-
-1. Provide a name for your connection.
-
-1. Enter your SendGrid API key,
-and then choose **Create**.
-
-1. Provide the necessary details for your selected action
-and continue building your logic app's workflow.
-
-## Connector reference
-
-For technical details about triggers, actions, and limits, which are
-described by the connector's OpenAPI (formerly Swagger) description,
-review the connector's [reference page](/connectors/sendgrid/).
-
-## Get support
-
-* For questions, visit the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
-* To submit or vote on feature ideas, visit the [Logic Apps user feedback site](https://aka.ms/logicapps-wish).
-
-## Next steps
-
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
cosmos-db Automated Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/automated-recommendations.md
Title: Automated performance, cost, security recommendations for Azure Cosmos DB description: Learn how to view customized performance, cost, security, and other recommendations for Azure Cosmos DB based on your workload patterns.--++ Last updated 08/26/2021
cosmos-db Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/compliance.md
Title: Azure Cosmos DB compliance description: This article describes compliance coverage for Azure Cosmos DB.--++ Last updated 09/11/2021
cosmos-db Database Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-encryption-at-rest.md
Title: Encryption at rest in Azure Cosmos DB description: Learn how Azure Cosmos DB provides encryption of data at rest and how it is implemented.--++ Last updated 10/26/2021
cosmos-db Database Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-security.md
Title: Database security - Azure Cosmos DB description: Learn how Azure Cosmos DB provides database protection and data security for your data.--++ Last updated 07/18/2022
cosmos-db How To Always Encrypted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-always-encrypted.md
description: Learn how to use client-side encryption with Always Encrypted for A
Last updated 04/04/2022--++ # Use client-side encryption with Always Encrypted for Azure Cosmos DB
cosmos-db How To Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-firewall.md
description: Learn how to configure IP access control policies for firewall supp
Last updated 02/18/2022--++
cosmos-db How To Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-private-endpoints.md
Title: Configure Azure Private Link for an Azure Cosmos account description: Learn how to set up Azure Private Link to access an Azure Cosmos account by using a private IP address in a virtual network. -+ Last updated 06/08/2021-+
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
Title: Configure virtual network based access for an Azure Cosmos account description: This document describes the steps required to set up a virtual network service endpoint for Azure Cosmos DB. -+ Last updated 07/07/2021-+
cosmos-db How To Define Unique Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-define-unique-keys.md
Title: Define unique keys for an Azure Cosmos container description: Learn how to define unique keys for an Azure Cosmos container using Azure portal, PowerShell, .NET, Java, and various other SDKs. -+ Last updated 12/02/2019-+
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cmk.md
Title: Configure customer-managed keys for your Azure Cosmos DB account description: Learn how to configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault-+ Last updated 07/20/2022-+ ms.devlang: azurecli
cosmos-db How To Setup Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-managed-identity.md
Title: Configure managed identities with Azure AD for your Azure Cosmos DB account description: Learn how to configure managed identities with Azure Active Directory for your Azure Cosmos DB account-+ Last updated 10/15/2021-+ # Configure managed identities with Azure Active Directory for your Azure Cosmos DB account
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Title: Configure role-based access control for your Azure Cosmos DB account with Azure AD description: Learn how to configure role-based access control with Azure Active Directory for your Azure Cosmos DB account-+ Last updated 02/16/2022-+
cosmos-db Limit Total Account Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/limit-total-account-throughput.md
Title: Limit the total throughput provisioned on your Azure Cosmos DB account description: Learn how to limit the total throughput provisioned on your Azure Cosmos DB account-+ Last updated 03/31/2022-+
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
description: Learn how Azure Cosmos DB provides database protection with Active
Last updated 05/11/2022--++
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
Title: Learn how to secure access to data in Azure Cosmos DB description: Learn about access control concepts in Azure Cosmos DB, including primary keys, read-only keys, users, and permissions.--++
cosmos-db Defender For Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/defender-for-cosmos-db.md
Last updated 06/21/2022--++ # Microsoft Defender for Azure Cosmos DB
cosmos-db How To Model Partition Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-model-partition-example.md
Title: Model and partition data on Azure Cosmos DB with a real-world example description: Learn how to model and partition a real-world example using the Azure Cosmos DB Core API-+ Last updated 08/26/2021-+ ms.devlang: javascript
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-dotnet.md
ms.devlang: dotnet
Previously updated : 06/24/2022 Last updated : 08/22/2022

# Quickstart: Azure Cosmos DB Table API for .NET

[!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]

This quickstart shows how to get started with the Azure Cosmos DB Table API from a .NET application. The Cosmos DB Table API is a schemaless data store that allows applications to store structured NoSQL data in the cloud. You'll learn how to create tables and rows, and perform basic tasks within your Cosmos DB resource using the [Azure.Data.Tables Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/).
This quickstart shows how to get started with the Azure Cosmos DB Table API from
## Prerequisites

* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
-* [.NET 6.0](https://dotnet.microsoft.com/en-us/download)
+* [.NET 6.0](https://dotnet.microsoft.com/download)
* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)

### Prerequisite check
This quickstart shows how to get started with the Azure Cosmos DB Table API from
## Setting up
-This section walks you through how to create an Azure Cosmos account and set up a project that uses the Table API NuGet packages.
+This section walks you through how to create an Azure Cosmos account and set up a project that uses the Table API NuGet packages.
### Create an Azure Cosmos DB account
This quickstart will create a single Azure Cosmos DB account using the Table API
### Create a new .NET app
-Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-new) to create a new console app.
+Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-new) to create a new console app.
```console
-dotnet new console -output <app-name>
+dotnet new console --output <app-name>
```

### Install the NuGet package
The sample code described in this article creates a table named ``adventureworks
You'll use the following Table API classes to interact with these resources: -- [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) - This class provides methods to perform service level operations with Azure Cosmos DB Table API.-- [``TableClient``](/dotnet/api/azure.data.tables.tableclient) - This class allows you to interact with tables hosted in the Azure Cosmos DB table API.-- [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) - This class is a reference to a row in a table that allows you to manage properties and column data.
+* [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) - This class provides methods to perform service level operations with Azure Cosmos DB Table API.
+* [``TableClient``](/dotnet/api/azure.data.tables.tableclient) - This class allows you to interact with tables hosted in the Azure Cosmos DB table API.
+* [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) - This class is a reference to a row in a table that allows you to manage properties and column data.
### Authenticate the client
The easiest way to create a new item in a table is to create a class that implem
:::code language="csharp" source="~/azure-cosmos-tableapi-dotnet/001-quickstart/Product.cs" id="type" :::
-Create an item in the collection using the `Product` class by calling [``TableClient.AddEntityAsync<T>``](/dotnet/api/azure.data.tables.tableclient.addentityasync).
+Create an item in the collection using the `Product` class by calling [``TableClient.AddEntityAsync<T>``](/dotnet/api/azure.data.tables.tableclient.addentityasync).
:::code language="csharp" source="~/azure-cosmos-tableapi-dotnet/001-quickstart/Program.cs" id="create_object_add" :::
You can retrieve a specific item from a table using the [``TableEntity.GetEntity
### Query items
-After you insert an item, you can also run a query to get all items that match a specific filter by using the `TableClient.Query<T>` method. This example filters products by category using [Linq](/dotnet/standard/linq) syntax, which is a benefit of using strongly typed `ITableEntity` models like the `Product` class.
+After you insert an item, you can also run a query to get all items that match a specific filter by using the `TableClient.Query<T>` method. This example filters products by category using [LINQ](/dotnet/standard/linq) syntax, which is a benefit of using typed `ITableEntity` models like the `Product` class.
> [!NOTE]
> You can also query items using [OData](/rest/api/storageservices/querying-tables-and-entities) syntax. You can see an example of this approach in the [Query Data](./tutorial-query-table.md) tutorial.
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/find-request-unit-charge.md
Title: Find request unit (RU) charge for a Table API queries in Azure Cosmos DB description: Learn how to find the request unit (RU) charge for Table API queries executed against an Azure Cosmos container. You can use the Azure portal, .NET, Java, Python, and Node.js languages to find the RU charge. -+ Last updated 10/14/2020-+ ms.devlang: csharp
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connect-data-factory-to-azure-purview.md
You have two options to connect data factory to Microsoft Purview:
### Connect to Microsoft Purview account in Data Factory
-You need to have **Owner** or **Contributor** role on your data factory to connect to a Microsoft Purview account.
+You need to have **Owner** or **Contributor** role on your data factory to connect to a Microsoft Purview account. Your data factory needs to have system assigned managed identity enabled.
To establish the connection on Data Factory authoring UI:
databox-online Azure Stack Edge Gpu Manage Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-certificates.md
To delete a signing chain certificate from your Azure Stack Edge device, take th
1. Select the signing chain certificate you want to delete. Then select **Delete**.
- [ ![Screenshot of the Certificates blade of the local Web UI of an Azure Stack Edge device. The Delete option for the signing certificates is highlighted.](media/azure-stack-edge-gpu-manage-certificates/delete-signing-certificate-01.png) ](media/azure-stack-edge-gpu-manage-certificates/delete-signing-certificate-01.png)
+ [ ![Screenshot of the Certificates blade of the local Web UI of an Azure Stack Edge device. The Delete option for the signing certificates is highlighted.](media/azure-stack-edge-gpu-manage-certificates/delete-signing-certificate-01.png) ](media/azure-stack-edge-gpu-manage-certificates/delete-signing-certificate-01.png#lightbox)
1. On the **Delete certificate** pane, verify the certificate's thumbprint, and then select **Delete**. Certificate deletion can't be reversed.
defender-for-cloud Onboard Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboard-management-group.md
To onboard a management group and all its subscriptions:
The remediation task will then enable Defender for Cloud, for free, on the non-compliant subscriptions.

> [!IMPORTANT]
-> The policy definition will only enable Defender for Cloud on **existing** subscriptions. To register newly created subscriptions, open the compliance tab, select the relevant non-compliant subscriptions, and create a remediation task.Repeat this step when you have one or more new subscriptions you want to monitor with Defender for Cloud.
+> The policy definition will only enable Defender for Cloud on **existing** subscriptions. To register newly created subscriptions, open the compliance tab, select the relevant non-compliant subscriptions, and create a remediation task. Repeat this step when you have one or more new subscriptions you want to monitor with Defender for Cloud.
## Optional modifications
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
| Version | Date released | End support date |
|--|--|--|
+| 22.2.5 | 08/2022 | 04/2023 |
| 22.2.4 | 07/2022 <br> There's a known compatibility issue with Hyper-V, please use version 22.1.7 | 04/2023 |
| 22.2.3 | 07/2022 <br> There's a known compatibility issue with Hyper-V, please use version 22.1.7 | 04/2023 |
| 22.1.7 | 07/2022 | 04/2023 |
For more information, see the [Microsoft Security Development Lifecycle practice
## August 2022
+- **Sensor software version 22.2.5**: Minor version with stability improvements
- [New alert columns with timestamp data](#new-alert-columns-with-timestamp-data) - [Sensor health from the Azure portal (Public preview)](#sensor-health-from-the-azure-portal-public-preview)
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-ingest-iot-hub-data.md
When the twin is created successfully, the CLI output from the command should lo
In this section, you'll create an Azure function to access Azure Digital Twins and update twins based on IoT telemetry events that it receives. Follow the steps below to create and publish the function.
-1. First, create a new Azure Functions project.
+1. First, create a new Azure Functions project of Event Grid trigger type.
You can do this using **Visual Studio** (for instructions, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project)), **Visual Studio Code** (for instructions, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#create-an-azure-functions-project)), or the **Azure CLI** (for instructions, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#create-a-local-function-project)).
dns Tutorial Alias Tm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/tutorial-alias-tm.md
Create an alias record that points to the Traffic Manager profile.
1. From a web browser, browse to `contoso.com` or your apex domain name. You see the IIS default page with `Hello World from Web-01`. Traffic Manager directed traffic to the **Web-01** IIS web server because it has the highest priority. Close the web browser and shut down the **Web-01** virtual machine. Wait a few minutes for the virtual machine to completely shut down.
1. Open a new web browser, and browse again to `contoso.com` or your apex domain name.
-1. You should see the IIS default page with `Hello World from Web-01`. The Traffic Manager handled the situation and directed traffic to the second IIS server after shutting down the first server that has the highest priority.
+1. You should see the IIS default page with `Hello World from Web-02`. Traffic Manager directed traffic to the second IIS server after you shut down the first server, which has the highest priority.
## Clean up resources
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
ExpressRoute Direct and ExpressRoute circuit(s) in different subscriptions or Az
```powershell
Add-AzExpressRoutePortAuthorization -Name $Name -ExpressRoutePort $ERPort
- Set-AzExpressRoutePort -ExpressRoutePort $ERPort
```
Sample output:
hdinsight Apache Hive Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector.md
value. The value may be similar to: `thrift://iqgiro.rekufuk2y2cezcbowjkbwfnyvd.
| Configuration | Value |
|-|-|
- |`spark.datasource.hive.warehouse.load.staging.dir`|`wasbs://STORAGE_CONTAINER_NAME@STORAGE_ACCOUNT_NAME.blob.core.windows.net/tmp`. <br> Set to a suitable HDFS-compatible staging directory. If you have two different clusters, the staging directory should be a folder in the staging directory of the LLAP cluster's storage account so that HiveServer2 has access to it. Replace `STORAGE_ACCOUNT_NAME` with the name of the storage account being used by the cluster, and `STORAGE_CONTAINER_NAME` with the name of the storage container. |
+ |`spark.datasource.hive.warehouse.load.staging.dir`| If you are using ADLS Gen2 Storage Account, use `abfss://STORAGE_CONTAINER_NAME@STORAGE_ACCOUNT_NAME.dfs.core.windows.net/tmp`<br>If you are using Azure Blob Storage Account, use `wasbs://STORAGE_CONTAINER_NAME@STORAGE_ACCOUNT_NAME.blob.core.windows.net/tmp`. <br> Set to a suitable HDFS-compatible staging directory. If you have two different clusters, the staging directory should be a folder in the staging directory of the LLAP cluster's storage account so that HiveServer2 has access to it. Replace `STORAGE_ACCOUNT_NAME` with the name of the storage account being used by the cluster, and `STORAGE_CONTAINER_NAME` with the name of the storage container. |
|`spark.sql.hive.hiveserver2.jdbc.url`| The value you obtained earlier from **HiveServer2 Interactive JDBC URL** |
|`spark.datasource.hive.warehouse.metastoreUri`| The value you obtained earlier from **hive.metastore.uris**. |
|`spark.security.credentials.hiveserver2.enabled`|`true` for YARN cluster mode and `false` for YARN client mode. |
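As a sketch only (the storage account, container, JDBC URL, and metastore URI below are placeholders; substitute the values you gathered from your own cluster), these settings are typically passed as Spark configuration when launching a session:

```shell
# Hedged example: every value here is a placeholder, not a working endpoint.
spark-shell \
  --conf spark.datasource.hive.warehouse.load.staging.dir="abfss://mycontainer@mystorageacct.dfs.core.windows.net/tmp" \
  --conf spark.sql.hive.hiveserver2.jdbc.url="<HiveServer2 Interactive JDBC URL>" \
  --conf spark.datasource.hive.warehouse.metastoreUri="<hive.metastore.uris value>" \
  --conf spark.security.credentials.hiveserver2.enabled=false
```

The same `--conf` pairs can also be set once in the cluster's Spark configuration instead of per session.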
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
Title: Overview of FHIR search in Azure Health Data Services description: This article describes an overview of FHIR search that is implemented in Azure Health Data Services Last updated 06/06/2022

# Overview of FHIR search
-The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<WORKSPACE NAME>-<ACCOUNT-NAME>.fhir.azurehealthcareapis.com`. In the examples, we'll use the placeholder {{FHIR_URL}} for this URL.
+The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<WORKSPACE NAME>-<ACCOUNT-NAME>.fhir.azurehealthcareapis.com`. In the examples, we'll use the placeholder `{{FHIR_URL}}` for this URL.
FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request:
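As a hedged sketch of that request (the workspace and account names are placeholders, and a real call also needs an Azure AD access token):

```shell
# Placeholder FHIR endpoint; substitute your own workspace and account names.
FHIR_URL="https://myworkspace-myfhirservice.fhir.azurehealthcareapis.com"

# Searching a resource type is a GET against {{FHIR_URL}}/<ResourceType>:
SEARCH_URL="${FHIR_URL}/Patient"

# The real request (commented out here) also needs a bearer token:
#   curl -H "Authorization: Bearer $TOKEN" "$SEARCH_URL"
echo "$SEARCH_URL"
```

The response is a `Bundle` resource containing the matching entries.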
import-export Storage Import Export View Drive Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-view-drive-status.md
If you created your import or export job in Azure Data Box (the Preview experien
A list of Import/Export jobs appears on the page.
- [ ![Screenshot of Data Box resources in the Azure portal filtered to Import Export jobs. The job name, transfer type, status, and model are highlighted.](./media/storage-import-export-view-drive-status/preview-jobs-list.png) ](./media/storage-import-export-view-drive-status/preview-jobs-list.png)
+ [ ![Screenshot of Data Box resources in the Azure portal filtered to Import Export jobs. The job name, transfer type, status, and model are highlighted.](./media/storage-import-export-view-drive-status/preview-jobs-list.png) ](./media/storage-import-export-view-drive-status/preview-jobs-list.png#lightbox)
4. Select a job name to view job details.
iot-central Concepts Device Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md
A DTDL model can be a _no-component_ or a _multi-component_ model:
- Multi-component model. A more complex model that includes two or more components. These components include a single root component, and one or more nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.

> [!TIP]
-> You can [export a device model](howto-set-up-template.md#interfaces-and-components) from an IoT Central device template as a DTDL v2 file.
+> You can [import and export a complete device model or individual interface](howto-set-up-template.md#interfaces-and-components) from an IoT Central device template as a DTDL v2 file.
To learn more about device models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md)
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
To learn more about the DPS payload, see the sample code used in the [Tutorial: C
## Device models
-A device model defines how a device interacts with your IoT Central application. The device developer must make sure that the device implements the behaviors defined in the device model so that IoT Central can monitor and manage the device. A device model is made up of one or more _interfaces_, and each interface can define a collection of _telemetry_ types, _device properties_, and _commands_. A solution developer can import a JSON file that defines the device model into a device template, or use the web UI in IoT Central to create or edit a device model.
+A device model defines how a device interacts with your IoT Central application. The device developer must make sure that the device implements the behaviors defined in the device model so that IoT Central can monitor and manage the device. A device model is made up of one or more _interfaces_, and each interface can define a collection of _telemetry_ types, _device properties_, and _commands_. A solution developer can import a JSON file that defines a complete device model or individual interface into a device template, or use the web UI in IoT Central to create or edit a device model.
To learn more about editing a device model, see [Edit an existing device template](howto-edit-device-template.md)
-A solution developer can also export a JSON file that contains the device model. A device developer can use this JSON document to understand how the device should communicate with the IoT Central application.
+A solution developer can also export a JSON file from the device template that contains a complete device model or individual interface. A device developer can use this JSON document to understand how the device should communicate with the IoT Central application.
The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central expects the JSON file to contain the device model with the interfaces defined inline, rather than in separate files. To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md).
You can choose queue commands if a device is currently offline by enabling the *
Offline commands are one-way notifications to the device from your solution. Offline commands can have request parameters but don't return a response.

> [!NOTE]
-> This option is only available in the IoT Central web UI. This setting isn't included if you export a model or interface from the device template.
+> Offline commands are marked as `durable` if you export the model as DTDL.
## Cloud properties
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
When the device has finished processing the request, it should send a property t
In the IoT Central web UI, you can select the **Queue if offline** option for a command. Offline commands are one-way notifications to the device from your solution that are delivered as soon as a device connects. Offline commands can have a request parameter but don't return a response.
-The **Queue if offline** setting isn't included if you export a model or interface from the device template. You can't tell by looking at an exported model or interface JSON that a command is an offline command.
+Offline commands are marked as `durable` if you export the model as DTDL.
Offline commands use [IoT Hub cloud-to-device messages](../../iot-hub/iot-hub-devguide-messages-c2d.md) to send the command and payload to the device.
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
To reassign an organization to a new parent, select **Edit** and choose a new pa
To delete an organization, you must delete or move to another organization any associated items such as dashboards, devices, users, device groups, and jobs.

> [!TIP]
-> You can also use the REST API to [create and manage organizations](/rest/api/iotcentral/1.2-previewdataplane/organizations).
+> You can also use the REST API to [create and manage organizations](/rest/api/iotcentral/2022-07-31dataplane/organizations).
## Assign devices
The following limits apply to organizations:
## Next steps Now that you've learned how to manage Azure IoT Central organizations, the suggested next step is to learn how to [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).-
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md
instanceOf: .device.templateId,
properties: .device.properties.reported | map({ key: .name, value: .value }) | from_entries
```
-Device templates: If you're currently using legacy data exports with the device templates data type, then you can obtain the same data using the [Device Templates - Get API call](/rest/api/iotcentral/2022-05-31dataplane/device-templates/get).
+Device templates: If you're currently using legacy data exports with the device templates data type, then you can obtain the same data using the [Device Templates - Get API call](/rest/api/iotcentral/2022-07-31dataplane/device-templates/get).
### Destination migration considerations
This example snapshot shows a message that contains device and properties data i
If you have an existing data export in your preview application with the *Devices* and *Device templates* streams turned on, update your export by **30 June 2020**. This requirement applies to exports to Azure Blob storage, Azure Event Hubs, and Azure Service Bus.
-Starting 3 February 2020, all new exports in applications with Devices and Device templates enabled will have the data format described above. All exports created before this date remain on the old data format until 30 June 2020, at which time these exports will automatically be migrated to the new data format. The new data format matches the [device](/rest/api/iotcentral/2022-05-31dataplane/devices/get), [device property](/rest/api/iotcentral/2022-05-31dataplane/devices/get-properties), and [device template](/rest/api/iotcentral/2022-05-31dataplane/device-templates/get) objects in the IoT Central public API.
+Starting 3 February 2020, all new exports in applications with Devices and Device templates enabled will have the data format described above. All exports created before this date remain on the old data format until 30 June 2020, at which time these exports will automatically be migrated to the new data format. The new data format matches the [device](/rest/api/iotcentral/2022-07-31dataplane/devices/get), [device property](/rest/api/iotcentral/2022-07-31dataplane/devices/get-properties), and [device template](/rest/api/iotcentral/2022-07-31dataplane/device-templates/get) objects in the IoT Central public API.
For **Devices**, notable differences between the old data format and the new data format include: - `@id` for device is removed, `deviceId` is renamed to `id`
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
The request body has some required fields:
* `capabilityModel`: Every device template has a capability model. A relationship is established between each module capability model and a device model. A capability model implements one or more module interfaces.

> [!TIP]
-> The device template JSON is not a standard DTDL document. The device template JSON includes IoT Central specific data such as cloud property definitions and display units. You can use the device template JSON format to import and export device templates in IoT Central by using the REST API and the CLI.
+> The device template JSON is not a standard DTDL document. The device template JSON includes IoT Central specific data such as cloud property definitions and display units. You can use the device template JSON format to import and export device templates in IoT Central by using the REST API, the CLI, and the UI.
There are some optional fields you can use to add more details to the capability model, such as display name and description.
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
The response to this request looks like the following example. The role value id
}
```
-You can also add a service principal user which is useful if you need to use service principal authentication for REST API calls. To learn more, see [Add or update a service principal user](/rest/api/iotcentral/2022-05-31dataplane/users/create#add-or-update-a-service-principal-user).
+You can also add a service principal user, which is useful if you need to use service principal authentication for REST API calls. To learn more, see [Add or update a service principal user](/rest/api/iotcentral/2022-07-31dataplane/users/create#add-or-update-a-service-principal-user).
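As an illustrative sketch (every ID below is a placeholder, and the exact body shape should be checked against the linked API reference), adding a service principal user is a `PUT` whose body declares the `servicePrincipal` type:

```shell
# Placeholder values throughout; a real request needs a valid bearer token.
cat > body.json <<'EOF'
{
  "type": "servicePrincipal",
  "tenantId": "<your-tenant-id>",
  "objectId": "<service-principal-object-id>",
  "roles": [ { "role": "<role-id>" } ]
}
EOF

# Hypothetical request (commented out; substitute your app subdomain):
#   curl -X PUT -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
#     -d @body.json "https://myapp.azureiotcentral.com/api/users/sp-user-001?api-version=2022-07-31"
grep -c servicePrincipal body.json
```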
### Change the role of a user
DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-v
## Next steps
-Now that you've learned how to manage users and roles with the REST API, a suggested next step is to [How to use the IoT Central REST API to manage organizations.](howto-manage-organizations-with-rest-api.md)
+Now that you've learned how to manage users and roles with the REST API, a suggested next step is to learn [how to use the IoT Central REST API to manage organizations](howto-manage-organizations-with-rest-api.md).
iot-central Howto Migrate To Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-migrate-to-iot-hub.md
npm install
npm start ```
-After the migrator app starts, navigate to [http://localhost:3000](http://localhost:3000) to view the tool.
+After the migrator app starts, navigate to `http://localhost:3000` to view the tool.
## Migrate devices
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
To find a device template ID, navigate to the **Devices** page in your IoT Centr
:::image type="content" source="media/howto-query-with-rest-api/show-device-template-id.png" alt-text="Screenshot that shows how to find the device template ID in the page URL.":::
-You can also use the [Devices - Get](/rest/api/iotcentral/1.2-previewdataplane/devices/get) REST API call to get the device template ID for a device.
+You can also use the [Devices - Get](/rest/api/iotcentral/2022-07-31dataplane/devices/get) REST API call to get the device template ID for a device.
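As a hedged sketch of that call (the app subdomain and device ID are placeholders, and the request needs an API token or bearer token):

```shell
# Placeholder app subdomain and device ID; substitute your own values.
APP_SUBDOMAIN="myapp"
DEVICE_ID="device-001"
URL="https://${APP_SUBDOMAIN}.azureiotcentral.com/api/devices/${DEVICE_ID}?api-version=2022-07-31"

# Real call (commented out; needs an authorization header):
#   curl -H "Authorization: $TOKEN" "$URL"
echo "$URL"
```

The device template ID appears in the response JSON for the device.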
## WHERE clause
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
Cloud-to-device messages:
- Require the device to implement a message handler to process the cloud-to-device message.

> [!NOTE]
-> This option is only available in the IoT Central web UI. This setting isn't included if you export a model or component from the device template.
+> Offline commands are marked as `durable` if you export the model as DTDL.
## Cloud properties
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
The following snippet shows the JSON representation of the command in the device
```

> [!TIP]
-> You can export a device model from the device template page.
+> You can export a device model or interface from the device template page.
You can relate this command definition to the screenshot of the UI using the following fields:
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
This article introduces you to Azure IoT Central REST API. Use the API to create
The REST API operations are grouped into the: -- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/2022-05-31dataplane/api-tokens) and [preview](/rest/api/iotcentral/1.2-previewdataplane/api-tokens) versions of the data plane API.
+- *Data plane* operations that let you work with resources inside IoT Central applications. Data plane operations let you automate tasks that can also be completed using the IoT Central UI. Currently, there are [generally available](/rest/api/iotcentral/2022-07-31dataplane/api-tokens) and [preview](/rest/api/iotcentral/2022-06-30-previewdataplane/api-tokens) versions of the data plane API.
- *Control plane* operations that let you work with the Azure resources associated with IoT Central applications. Control plane operations let you automate tasks that can also be completed in the Azure portal. ## Data plane operations
iot-edge Tutorial C Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-c-module.md
# Mandatory fields. See more on aka.ms/skyeye/meta. Title: Tutorial develop C module for Linux - Azure IoT Edge | Microsoft Docs
+ Title: Tutorial - develop C module for Linux - Azure IoT Edge | Microsoft Docs
description: This tutorial shows you how to create an IoT Edge module with C code and deploy it to a Linux device running IoT Edge
Use the following table to understand your options for developing and deploying
| - | - | - |
| **Linux AMD64** | ![Use VS Code for C modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux AMD64](./media/tutorial-c-module/green-check.png) |
| **Linux ARM32** | ![Use VS Code for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) |
+| **Linux ARM64** | ![Use VS Code for C modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux ARM64](./media/tutorial-c-module/green-check.png) |
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
Before beginning this tutorial, you should have gone through the previous tutori
* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools). * [Docker CE](https://docs.docker.com/install/) configured to run Linux containers.
-To develop an IoT Edge module in C, install the following additional prerequisites on your development machine:
+To develop an IoT Edge module in C, install the following prerequisites on your development machine:
* [C/C++ extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.cpptools) for Visual Studio Code.
-Installing the Azure IoT C SDK is not required for this tutorial, but can provide helpful functionality like intellisense and reading program definitions. For installation information, see [Azure IoT C SDKs and Libraries](https://github.com/Azure/azure-iot-sdk-c).
+Installing the Azure IoT C SDK isn't required for this tutorial, but can provide helpful functionality like intellisense and reading program definitions. For installation information, see [Azure IoT C SDKs and Libraries](https://github.com/Azure/azure-iot-sdk-c).
## Create a module project
Currently, Visual Studio Code can develop C modules for Linux AMD64 and Linux AR
### Update the module with custom code
-The default module code receives messages on an input queue and passes them along through an output queue. Let's add some additional code so that the module processes the messages at the edge before forwarding them to IoT Hub. Update the module so that it analyzes the temperature data in each message, and only sends the message to IoT Hub if the temperature exceeds a certain threshold.
+The default module code receives messages on an input queue and passes them along through an output queue. Let's add more code so the module processes messages at the edge before forwarding them to IoT Hub. Update the module so that it analyzes the temperature data in each message, and only sends the message to IoT Hub if the temperature exceeds a certain threshold.
1. The data from the sensor in this scenario comes in JSON format. To filter messages in JSON format, import a JSON library for C. This tutorial uses Parson.
Make sure that your IoT Edge device is up and running.
2. Right-click the name of your IoT Edge device, then select **Create Deployment for Single Device**.
-3. Select the **deployment.amd64.json** file in the **config** folder and then click **Select Edge Deployment Manifest**. Do not use the deployment.template.json file.
+3. Select the **deployment.amd64.json** file in the **config** folder and then click **Select Edge Deployment Manifest**. Don't use the deployment.template.json file, as that file is only a template.
4. Under your device, expand **Modules** to see a list of deployed and running modules. Click the refresh button. You should see the new **CModule** running along with the **SimulatedTemperatureSensor** module and the **$edgeAgent** and **$edgeHub**.
We used the CModule module twin in the deployment manifest to set the temperatur
## Clean up resources
-If you plan to continue to the next recommended article, you can keep the resources and configurations that you created and reuse them. You can also keep using the same IoT Edge device as a test device.
+If you continue to the next recommended article, you can keep your resources and configurations and reuse them. You can also keep using the same IoT Edge device as a test device.
Otherwise, you can delete the local configurations and the Azure resources that you used in this article to avoid charges.
iot-edge Tutorial Java Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-java-module.md
Use the following table to understand your options for developing and deploying
| - | - | - |
| **Linux AMD64** | ![Use VS Code for Java modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
| **Linux ARM32** | ![Use VS Code for Java modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
+| **Linux ARM64** | ![Use VS Code for Java modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing either of those tutorials, you should have the following prerequisites in place:
iot-edge Tutorial Python Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-python-module.md
Use the following table to understand your options for developing and deploying
| - | - | - |
| **Linux AMD64** | ![Use VS Code for Python modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
| **Linux ARM32** | ![Use VS Code for Python modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
+| **Linux ARM64** | ![Use VS Code for Python modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
iot-hub-device-update Device Update Azure Real Time Operating System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-azure-real-time-operating-system.md
description: Get started with Device Update for Azure RTOS.
Last updated 3/18/2021-+
-# Tutorial: Device Update for Azure IoT Hub using Azure RTOS
+# Device Update for Azure IoT Hub using Azure RTOS
-This tutorial shows you how to create the Device Update for Azure IoT Hub agent in Azure RTOS NetX Duo. It also provides simple APIs for developers to integrate the Device Update capability in their application. Explore [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) of key semiconductors evaluation boards that include the get-started guides to learn how to configure, build, and deploy over-the-air updates to the devices.
-
-In this tutorial, you'll learn how to:
-> [!div class="checklist"]
-> * Get started.
-> * Tag your device.
-> * Create a device group.
-> * Deploy an image update.
-> * Monitor the update deployment.
+This article shows you how to create the Device Update for Azure IoT Hub agent in Azure RTOS NetX Duo. It also provides simple APIs for developers to integrate the Device Update capability in their application. Explore [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) of key semiconductors evaluation boards that include the get-started guides to learn how to configure, build, and deploy over-the-air updates to the devices.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
Learn more about [Azure RTOS](/azure/rtos/).
1. On the left pane, under **IoT Devices**, find your IoT device and go to the device twin. 1. In the device twin, delete any existing Device Update tag values by setting them to null. 1. Add a new Device Update tag value to the root JSON object, as shown:
-
+   ```JSON
+   "tags": {
+       "ADUGroup": "<CustomTagValue>"
+   }
+   ```
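As a sketch of what that tag edit amounts to, here is the change as plain dict manipulation. This is illustration only: a real change goes through the device twin in the Azure portal or an IoT Hub service SDK, and the function name is made up for this example.

```python
def tag_device_for_update(twin_tags, group_value):
    """Return a copy of the twin's tags with the Device Update group tag set.

    Assigning the key replaces any existing ADUGroup value, mirroring the
    clear-then-set steps above. Function name is illustrative, not an SDK call.
    """
    tags = dict(twin_tags or {})
    tags["ADUGroup"] = group_value
    return tags
```

Device Update then groups devices by the `ADUGroup` tag value when you create a device group for deployments.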
Learn more about [Azure RTOS](/azure/rtos/).
You've now completed a successful end-to-end image update by using Device Update for IoT Hub on an Azure RTOS embedded device.
-## Clean up resources
-
-When no longer needed, clean up your device update account, instance, IoT hub, and IoT device.
- ## Next steps To learn more about Azure RTOS and how it works with IoT Hub, see the [Azure RTOS webpage](https://azure.com/rtos).
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
This region doesn't affect how the traffic will be routed. If a home region goes
* North Europe * East Asia * US Gov Virginia
+* UK West
+* UK South
> [!NOTE] > You can only deploy your cross-region load balancer or Public IP in Global tier in one of the regions above.
load-balancer Ipv6 Dual Stack Standard Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-dual-stack-standard-internal-load-balancer-powershell.md
-# Deploy an IPv6 dual stack application using Standard Internal Load Balancer in Azure - PowerShell (Preview)
+# Deploy an IPv6 dual stack application using Standard Internal Load Balancer in Azure - PowerShell
This article shows you how to deploy a dual stack (IPv4 + IPv6) application in Azure that includes a dual stack virtual network and subnet, a Standard Internal Load Balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, network security group, and public IPs.
logic-apps Add Artifacts Integration Service Environment Ise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/add-artifacts-integration-service-environment-ise.md
ms.suite: integration Previously updated : 02/28/2021 Last updated : 08/20/2022 # Add resources to your integration service environment (ISE) in Azure Logic Apps
logic-apps Block Connections Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/block-connections-connectors.md
ms.suite: integration Previously updated : 05/18/2022 Last updated : 08/22/2022 # Block connector usage in Azure Logic Apps + If your organization doesn't permit connecting to restricted or unapproved resources using their [managed connectors](../connectors/managed.md) in Azure Logic Apps, you can block the capability to create and use those connections in logic app workflows. With [Azure Policy](../governance/policy/overview.md), you can define and enforce [policies](../governance/policy/overview.md#policy-definition) that prevent creating or using connections for connectors that you want to block. For example, for security reasons, you might want to block connections to specific social media platforms or other services and systems. This article shows how to set up a policy that blocks specific connections by using the Azure portal, but you can create policy definitions in other ways. For example, you can use the Azure REST API, Azure PowerShell, Azure CLI, and Azure Resource Manager templates. For more information, see [Tutorial: Create and manage policies to enforce compliance](../governance/policy/tutorials/create-and-manage.md).
logic-apps Concepts Schedule Automated Recurring Tasks Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md
Last updated 08/20/2022
# Schedules for recurring triggers in Azure Logic Apps workflows + Azure Logic Apps helps you create and run automated recurring workflows on a schedule. By creating a logic app workflow that starts with a built-in Recurrence trigger or Sliding Window trigger, which are Schedule-type triggers, you can run tasks immediately, at a later time, or on a recurring interval. You can call services inside and outside Azure, such as HTTP or HTTPS endpoints, post messages to Azure services such as Azure Storage and Azure Service Bus, or get files uploaded to a file share. With the Recurrence trigger, you can also set up complex schedules and advanced recurrences for running tasks. To learn more about the built-in Schedule triggers and actions, see [Schedule triggers](#schedule-triggers) and [Schedule actions](#schedule-actions). > [!NOTE]
logic-apps Connect Virtual Network Vnet Isolated Environment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md
ms.suite: integration Previously updated : 05/16/2021 Last updated : 08/20/2022 # Access to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE) + Sometimes, your logic app workflows need access to protected resources, such as virtual machines (VMs) and other systems or services, that are inside or connected to an Azure virtual network. To directly access these resources from workflows that usually run in multi-tenant Azure Logic Apps, you can create and run your logic apps in an *integration service environment* (ISE) instead. An ISE is actually an instance of Azure Logic Apps that runs separately on dedicated resources, apart from the global multi-tenant Azure environment, and doesn't [store, process, or replicate data outside the region where you deploy the ISE](https://azure.microsoft.com/global-infrastructure/data-residency#select-geography). For example, some Azure virtual networks use private endpoints ([Azure Private Link](../private-link/private-link-overview.md)) for providing access to Azure PaaS services, such as Azure Storage, Azure Cosmos DB, or Azure SQL Database, partner services, or customer services that are hosted on Azure. If your logic app workflows require access to virtual networks that use private endpoints, you have these options:
logic-apps Connect Virtual Network Vnet Isolated Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment.md
Last updated 08/20/2022
# Connect to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE) + For scenarios where Consumption logic app resources and integration accounts need access to an [Azure virtual network](../virtual-network/virtual-networks-overview.md), create an [*integration service environment* (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is an environment that uses dedicated storage and other resources that are kept separate from the "global" multi-tenant Azure Logic Apps. This separation also reduces any impact that other Azure tenants might have on your apps' performance. An ISE also provides you with your own static IP addresses. These IP addresses are separate from the static IP addresses that are shared by the logic apps in the public, multi-tenant service. When you create an ISE, Azure *injects* that ISE into your Azure virtual network, which then deploys Azure Logic Apps into your virtual network. When you create a logic app or integration account, select your ISE as their location. Your logic app or integration account can then directly access resources, such as virtual machines (VMs), servers, systems, and services, in your virtual network.
logic-apps Connect Virtual Network Vnet Set Up Single Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-set-up-single-ip-address.md
ms.suite: integration Previously updated : 05/06/2020 Last updated : 08/20/2022 # Set up a single IP address for one or more integration service environments in Azure Logic Apps + When you work with Azure Logic Apps, you can set up an [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) for hosting logic apps that need access to resources in an [Azure virtual network](../virtual-network/virtual-networks-overview.md). When you have multiple ISE instances that need access to other endpoints that have IP restrictions, deploy an [Azure Firewall](../firewall/overview.md) or a [network virtual appliance](../virtual-network/virtual-networks-overview.md#filter-network-traffic) into your virtual network and route outbound traffic through that firewall or network virtual appliance. You can then have all the ISE instances in your virtual network use a single, public, static, and predictable IP address to communicate with the destination systems that you want. That way, you don't have to set up additional firewall openings at your destination systems for each ISE. This topic shows how to route outbound traffic through an Azure Firewall, but you can apply similar concepts to a network virtual appliance such as a third-party firewall from the Azure Marketplace. While this topic focuses on setup for multiple ISE instances, you can also use this approach for a single ISE when your scenario requires limiting the number of IP addresses that need access. Consider whether the additional costs for the firewall or virtual network appliance make sense for your scenario. Learn more about [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/).
logic-apps Create Integration Service Environment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-integration-service-environment-rest-api.md
ms.suite: integration Previously updated : 02/03/2021 Last updated : 08/20/2022 # Create an integration service environment (ISE) by using the Logic Apps REST API + For scenarios where your logic apps and integration accounts need access to an [Azure virtual network](../virtual-network/virtual-networks-overview.md), you can create an [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) by using the Logic Apps REST API. To learn more about ISEs, see [Access to Azure Virtual Network resources from Azure Logic Apps](connect-virtual-network-vnet-isolated-environment-overview.md). This article shows you how to create an ISE by using the Logic Apps REST API in general. Optionally, you can also enable a [system-assigned or user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) on your ISE, but only by using the Logic Apps REST API at this time. This identity lets your ISE authenticate access to secured resources, such as virtual machines and other systems or services, that are in or connected to an Azure virtual network. That way, you don't have to sign in with your credentials.
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
ms.suite: integration Previously updated : 07/30/2022 Last updated : 08/22/2022 # Authenticate access to Azure resources with managed identities in Azure Logic Apps + In logic app workflows, some triggers and actions support using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate access to resources protected by Azure Active Directory (Azure AD). This identity was previously known as a *Managed Service Identity (MSI)*. When you enable your logic app resource to use a managed identity for authentication, you don't have to provide credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage this sensitive information. Azure Logic Apps supports the [*system-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md) and the [*user-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md), but the following differences exist between these identity types:
logic-apps Create Parameters Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-parameters-workflows.md
Last updated 08/20/2022
# Create cross-environment parameters for workflow inputs in Azure Logic Apps + In Azure Logic Apps, you can abstract values that might change in workflows across development, test, and production environments by defining *parameters*. When you use parameters rather than environment-specific variables, you can initially focus more on designing your workflows, and insert your environment-specific variables later. This article introduces how to create, use, and edit parameters for multi-tenant Consumption logic app workflows and for single-tenant Standard logic app workflows. You'll also learn how to manage environment variables.
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
# Create an integration workflow with single-tenant Azure Logic Apps (Standard) in the Azure portal + This article shows how to create an example automated integration workflow that runs in the *single-tenant* Azure Logic Apps environment by using the **Logic App (Standard)** resource type and the Azure portal. This resource type can host multiple [stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless). Also, workflows in the same logic app and tenant run in the same process as the redesigned Azure Logic Apps runtime, so they share the same resources and provide better performance. For more information about the single-tenant Azure Logic Apps offering, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md). While this example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
# Create an integration workflow with single-tenant Azure Logic Apps (Standard) in Visual Studio Code + This article shows how to create an example automated integration workflow that runs in the *single-tenant* Azure Logic Apps environment by using Visual Studio Code with the **Azure Logic Apps (Standard)** extension. The logic app that you create with this extension is based on the **Logic App (Standard)** resource type, which provides the following capabilities: * You can locally run and test logic app workflows in the Visual Studio Code development environment.
logic-apps Custom Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/custom-connector-overview.md
ms.suite: integration Previously updated : 06/10/2022 Last updated : 08/22/2022 # As a developer, I want learn about the capability to create custom connectors with operations that I can use in my Azure Logic Apps workflows. # Custom connectors in Azure Logic Apps + Without writing any code, you can quickly create automated integration workflows when you use the prebuilt connector operations in Azure Logic Apps. A connector helps your workflows connect and access data, events, and actions across other apps, services, systems, protocols, and platforms. Each connector offers operations as triggers, actions, or both that you can add to your workflows. By using these operations, you expand the capabilities for your cloud apps and on-premises apps to work with new and existing data. Connectors in Azure Logic Apps are either *built in* or *managed*. A *built-in* connector runs natively on the Azure Logic Apps runtime, which means they're hosted in the same process as the runtime and provide higher throughput, low latency, and local connectivity. A *managed connector* is a proxy or a wrapper around an API, such as Office 365 or Salesforce, that helps the underlying service talk to Azure Logic Apps. Managed connectors are powered by the connector infrastructure in Azure and are deployed, hosted, run, and managed by Microsoft. You can choose from [hundreds of managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) to use with your workflows in Azure Logic Apps.
logic-apps Customer Managed Keys Integration Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/customer-managed-keys-integration-service-environment.md
ms.suite: integration Previously updated : 01/20/2021 Last updated : 08/20/2022 # Set up customer-managed keys to encrypt data at rest for integration service environments (ISEs) in Azure Logic Apps
logic-apps Logic Apps Author Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-author-definitions.md
ms.suite: integration Previously updated : 01/01/2018 Last updated : 08/21/2022 # Create, edit, or extend JSON for logic app workflow definitions in Azure Logic Apps
Last updated 01/01/2018
When you create enterprise integration solutions with automated workflows in [Azure Logic Apps](../logic-apps/logic-apps-overview.md),
-the underlying logic app definitions use simple
+the underlying workflow definitions use simple
and declarative JavaScript Object Notation (JSON) along with the [Workflow Definition Language (WDL) schema](../logic-apps/logic-apps-workflow-definition-language.md) for their description and validation. These formats
-make logic app definitions easier to read and
+make workflow definitions easier to read and
understand without knowing much about code.
-When you want to automate creating and deploying logic apps,
-you can include logic app definitions as
+When you want to automate creating and deploying logic app resources,
+you can include workflow definitions as
[Azure resources](../azure-resource-manager/management/overview.md) inside [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md). To create, manage, and deploy logic apps, you can then use
[Azure CLI](../azure-resource-manager/templates/deploy-cli.md), or the [Azure Logic Apps REST APIs](/rest/api/logic/).
-To work with logic app definitions in JSON,
+To work with workflow definitions in JSON,
open the Code View editor when working in the Azure portal or in Visual Studio, or copy the definition into any editor that you want.
-If you're new to logic apps, review
-[how to create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+If you're new to Azure Logic Apps, review
+[how to create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
> [!NOTE] > Some Azure Logic Apps capabilities, such as defining
-> parameters and multiple triggers in logic app definitions,
-> are available only in JSON, not the Logic Apps Designer.
+> parameters and multiple triggers in workflow definitions,
+> are available only in JSON, not the workflow designer.
> So for these tasks, you must work in Code View or another editor. ## Edit JSON - Azure portal
and then from the results, select your logic app.
select **Logic App Code View**. The Code View editor opens and shows
- your logic app definition in JSON format.
+ your workflow definition in JSON format.
## Edit JSON - Visual Studio
-Before you can work on your logic app definition
+Before you can work on your workflow definition
in Visual Studio, make sure that you've [installed the required tools](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md#prerequisites). To create a logic app with Visual Studio, review
or as Azure Resource Manager projects from Visual Studio.
or [Azure Resource Group](../azure-resource-manager/management/overview.md) project, that contains your logic app.
-2. Find and open your logic app's definition,
+2. Find and open your workflow definition,
which, by default, appears in a [Resource Manager template](../azure-resource-manager/templates/overview.md) named **LogicApp.json**.
You can use and customize this template for
deployment to different environments. 3. Open the shortcut menu for your
-logic app definition and template.
+workflow definition and template.
Select **Open With Logic App Designer**. ![Open logic app in a Visual Studio solution](./media/logic-apps-author-definitions/open-logic-app-designer.png)
Select **Open With Logic App Designer**.
> [!TIP] > If you don't have this command in Visual Studio 2019, check that you have the latest updates for Visual Studio.
-4. At the bottom of the designer, choose **Code View**.
+4. At the bottom of the workflow designer, choose **Code View**.
The Code View editor opens and shows
- your logic app definition in JSON format.
+ your workflow definition in JSON format.
5. To return to designer view, at the bottom of the Code View editor,
Follow these general steps to *parameterize*, or define and use parameters for,
## Process strings with functions
-Logic Apps has various functions for working with strings.
+Azure Logic Apps has various functions for working with strings.
For example, suppose you want to pass a company name from an order to another system. However, you're not sure about proper handling for character encoding. You could perform base64 encoding on this string, but to avoid escapes in the URL, you can replace several characters instead. Also, you only need a substring for
-the company name because the first five characters are not used.
+the company name because the first five characters aren't used.
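In ordinary code, that substring-plus-replace cleanup might look like the following sketch. The specific characters replaced are assumptions for illustration; the article's own JSON example performs the equivalent with Workflow Definition Language functions.

```python
def clean_company_name(name):
    # Drop the unused first five characters, then swap characters that
    # would need escaping in a URL (illustrative choices, not the article's).
    trimmed = name[5:]
    return trimmed.replace("&", "and").replace("#", "")
```

For example, `clean_company_name("ACME-Bread & Cake #1")` yields `"Bread and Cake 1"`, a string safe to pass along in a URL without base64 encoding.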
``` json {
logic-apps Logic Apps Batch Process Send Receive Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-batch-process-send-receive-messages.md
Title: Batch process messages as a group
description: Send and receive messages in groups between your workflows by using batch processing in Azure Logic Apps. ms.suite: integration-- Previously updated : 08/20/2022 Last updated : 08/21/2022 # Send, receive, and batch process messages in Azure Logic Apps
logic-apps Logic Apps Create Api App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-api-app.md
Title: Create web APIs & REST APIs for Azure Logic Apps
description: Create web APIs & REST APIs to call your APIs, services, or systems for system integrations in Azure Logic Apps ms.suite: integration-+ Previously updated : 05/26/2017 Last updated : 08/21/2022 # Create custom APIs you can call from Azure Logic Apps
Last updated 05/26/2017
Although Azure Logic Apps offers [hundreds of connectors](../connectors/apis-list.md) that you can use in logic app workflows, you might want to call APIs, systems, and services that aren't available as connectors.
-You can create your own APIs that provide actions and triggers to use in logic apps.
+You can create your own APIs that provide actions and triggers to use in workflows.
Here are other reasons why you might want to create your own APIs
-that you can call from logic app workflows:
+that you can call from workflows:
* Extend your current system integration and data integration workflows. * Help customers use your service to manage professional or personal tasks.
that you can call from logic app workflows:
Basically, connectors are web APIs that use REST for pluggable interfaces, [Swagger metadata format](https://swagger.io/specification/) for documentation, and JSON as their data exchange format. Because connectors are REST APIs
-that communicate through HTTP endpoints, you can use any language,
-like .NET, Java, Python, or Node.js, for building connectors.
+that communicate through HTTP endpoints, you can use any language to build connectors,
+such as .NET, Java, Python, or Node.js.
You can also host your APIs on [Azure App Service](../app-service/overview.md), a platform-as-a-service (PaaS) offering that provides one of the best, easiest, and most scalable ways for API hosting.
-For custom APIs to work with logic apps, your API can provide
+For custom APIs to work with logic app workflows, your API can provide
[*actions*](./logic-apps-overview.md#logic-app-concepts)
-that perform specific tasks in logic app workflows. Your API can also act as a
+that perform specific tasks in workflows. Your API can also act as a
[*trigger*](./logic-apps-overview.md#logic-app-concepts)
-that starts a logic app workflow when new data or an event meets a specified condition.
+that starts a workflow run when new data or an event meets a specified condition.
This topic describes common patterns that you can follow for building actions and triggers in your API, based on the behavior that you want your API to provide.
easy API hosting.
> consider deploying your APIs as API apps, > which can make your job easier when you build, host, and consume APIs > in the cloud and on premises. You don't have to change any code in your
-> APIs -- just deploy your code to an API app. For example, learn how to
+> APIs--just deploy your code to an API app. For example, learn how to
> build API apps created with these languages: > > * [ASP.NET](../app-service/quickstart-dotnetcore.md).
like custom APIs but also have these attributes:
* Registered as Logic Apps Connector resources in Azure. * Appear with icons alongside Microsoft-managed connectors in the Logic Apps Designer.
-* Available only to the connectors' authors and logic app users who have the same
+* Available only to the connectors' authors and logic app resource users who have the same
Azure Active Directory tenant and Azure subscription in the region where the logic apps are deployed.
You can also nominate registered connectors for Microsoft certification.
This process verifies that registered connectors meet the criteria for public use and makes those connectors available for users in Power Automate and Microsoft Power Apps.
-For more information about custom connectors, see
+For more information, review the following documentation:
* [Custom connectors overview](../logic-apps/custom-connector-overview.md) * [Create custom connectors from Web APIs](/connectors/custom-connectors/create-web-api-connector)
For logic apps to perform tasks, your custom API should provide
[*actions*](./logic-apps-overview.md#logic-app-concepts). Each operation in your API maps to an action. A basic action is a controller that accepts HTTP requests and returns HTTP responses.
-So for example, a logic app sends an HTTP request to your web app or API app.
-Your app then returns an HTTP response, along with content that the logic app can process.
+So for example, a workflow sends an HTTP request to your web app or API app.
+Your app then returns an HTTP response, along with content that the workflow can process.
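A minimal standard action can be sketched with Python's standard library. The endpoint path and payload shape below are assumptions for illustration, not part of the article: the only contract is that the controller accepts an HTTP request and returns a response body the workflow can process.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ActionHandler(BaseHTTPRequestHandler):
    """Accepts an HTTP POST and returns JSON that a workflow can process."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        order = json.loads(self.rfile.read(length))
        body = json.dumps({"status": "accepted", "item": order["item"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence request logging for the demo
        pass

def serve_one_request():
    """Start a server on a free port, handle a single request, return the port."""
    server = HTTPServer(("127.0.0.1", 0), ActionHandler)
    threading.Thread(target=server.handle_request, daemon=True).start()
    return server.server_address[1]
```

In a real deployment you would host such a controller on Azure App Service and describe the operation in a Swagger (OpenAPI) file so the designer can surface it.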
For a standard action, you can write an HTTP request method in your API and describe that method in a Swagger file. You can then call your API directly
By default, responses must be returned within the [request timeout limit](./logi
![Standard action pattern](./media/logic-apps-create-api-app/standard-action.png) <a name="pattern-overview"></a>
-To make a logic app wait while your API finishes longer-running tasks,
+To make a workflow wait while your API finishes longer-running tasks,
your API can follow the [asynchronous polling pattern](#async-pattern) or the [asynchronous webhook pattern](#webhook-actions) described in this topic. For an analogy that helps you visualize these patterns' different behaviors,
To have your API perform tasks that could run longer than the
[request timeout limit](./logic-apps-limits-and-config.md), you can use the asynchronous polling pattern. This pattern has your API do work in a separate thread,
-but keep an active connection to the Logic Apps engine.
-That way, the logic app does not time out or continue with
+but keep an active connection to the Azure Logic Apps engine.
+That way, the workflow doesn't time out or continue with
the next step in the workflow before your API finishes working. Here's the general pattern: 1. Make sure that the engine knows that your API accepted the request and started working.
-2. When the engine makes subsequent requests for job status, let the engine know when your API finishes the task.
-3. Return relevant data to the engine so that the logic app workflow can continue.
+1. When the engine makes subsequent requests for job status, let the engine know when your API finishes the task.
+1. Return relevant data to the engine so that the workflow can continue.
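The steps above can be sketched as a toy handshake between the engine and a long-running API. Class and function names here are illustrative stand-ins, not Azure APIs; in a real API the two sides talk over HTTP with `202 ACCEPTED`, `location`, and `retry-after`.

```python
class LongRunningJobApi:
    """Toy stand-in for a custom API that follows the async polling pattern."""

    def __init__(self):
        self.jobs = {}
        self.next_id = 0

    def start(self, payload):
        # Step 1: accept the request; reply 202 with a status URL and interval.
        job_id = str(self.next_id)
        self.next_id += 1
        self.jobs[job_id] = {"polls_left": 2, "result": payload.upper()}
        return 202, {"location": f"/jobs/{job_id}", "retry-after": "20"}, None

    def check(self, location):
        # Step 2: report 202 while still working; step 3: 200 with the result.
        job = self.jobs[location.rsplit("/", 1)[-1]]
        if job["polls_left"] > 0:
            job["polls_left"] -= 1
            return 202, {"location": location, "retry-after": "20"}, None
        return 200, {}, job["result"]

def engine_poll(api, payload):
    # What the engine does: follow the location header until a non-202 response.
    # (A real engine also waits retry-after seconds between polls.)
    status, headers, body = api.start(payload)
    while status == 202:
        status, headers, body = api.check(headers["location"])
    return body
```

The key design point is that the API returns immediately and does its work off the request thread, so the workflow neither times out nor advances before the job finishes.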
<a name="bakery-polling-action"></a> Now apply the previous bakery analogy to the polling pattern,
This back-and-forth process continues until you call,
and the bakery tells you that your order is ready and delivers your cake. So let's map this polling pattern back. The bakery represents your custom API,
-while you, the cake customer, represent the Logic Apps engine.
+while you, the cake customer, represent the Azure Logic Apps engine.
When the engine calls your API with a request, your API confirms the request and responds with the time interval when the engine can check job status. The engine continues checking job status until your API responds
described from the API's perspective:
1. When your API gets an HTTP request to start work, immediately return an HTTP `202 ACCEPTED` response with the `location` header described later in this step.
-This response lets the Logic Apps engine know that your API got the request,
+This response lets the Azure Logic Apps engine know that your API got the request,
accepted the request payload (data input), and is now processing. The `202 ACCEPTED` response should include these headers: * *Required*: A `location` header that specifies the absolute path
- to a URL where the Logic Apps engine can check your API's job status
+ to a URL where the Azure Logic Apps engine can check your API's job status
* *Optional*: A `retry-after` header that specifies the number of seconds that the engine should wait before checking the `location` URL for job status.
accepted the request payload (data input), and is now processing.
By default, the engine checks every 20 seconds. To specify a different interval, include the `retry-after` header and the number of seconds until the next poll.
-2. After the specified time passes, the Logic Apps engine polls
+2. After the specified time passes, the Azure Logic Apps engine polls
the `location` URL to check job status. Your API should perform these checks and return these responses:
checks and return these responses:
but with the same headers as the original response. When your API follows this pattern, you don't have to do anything in the
-logic app workflow definition to continue checking job status.
+workflow definition to continue checking job status.
When the engine gets an HTTP `202 ACCEPTED` response and a
-valid `location` header, the engine respects the asynchronous pattern
+valid `location` header, the engine respects the asynchronous pattern,
and checks the `location` header until your API returns a non-202 response. > [!TIP]
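The 202/`location`/`retry-after` contract above can be sketched server-side. This is an illustrative Python sketch only (the in-memory job store, URLs, and function names are hypothetical, not part of any Azure Logic Apps API):

```python
# Minimal sketch of the polling pattern's server-side logic.
_jobs: dict = {}

def _poll_headers(job_id):
    return {
        # Required: absolute URL where the engine checks job status.
        "location": f"https://contoso.example/jobs/{job_id}/status",
        # Optional: seconds the engine waits between polls (default is 20).
        "retry-after": "20",
    }

def start_job(job_id):
    """Initial request handler: accept the work, return 202 with polling headers."""
    _jobs[job_id] = "running"
    return 202, _poll_headers(job_id)

def finish_job(job_id):
    """Called by the background worker when the long-running task completes."""
    _jobs[job_id] = "done"

def check_status(job_id):
    """Handler for the location URL the engine polls."""
    if _jobs.get(job_id) == "done":
        # Any non-202 response ends the engine's polling loop.
        return 200, {}, {"result": "order complete"}
    # Still working: respond 202 again with the same headers as before.
    return 202, _poll_headers(job_id), None
```

The engine keeps calling `check_status` until it sees the non-202 response, at which point the workflow continues with the returned payload.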
and checks the `location` header until your API returns a non-202 response.
As an alternative, you can use the webhook pattern for long-running tasks and asynchronous processing.
-This pattern has the logic app pause and wait for a "callback"
+This pattern pauses the workflow and waits for a "callback"
from your API to finish processing before continuing the workflow. This callback is an HTTP POST that sends a message to a URL when an event happens.
you give them your phone number so they can call you when the cake is done.
This time, the bakery tells you when your order is ready and delivers your cake. When we map this webhook pattern back, the bakery represents your custom API,
-while you, the cake customer, represent the Logic Apps engine.
+while you, the cake customer, represent the Azure Logic Apps engine.
The engine calls your API with a request and includes a "callback" URL. When the job is done, your API uses the URL to notify the engine and returns data to your logic app, which then continues the workflow.
and returns data to your logic app, which then continues the workflow.
For this pattern, set up two endpoints on your controller: `subscribe` and `unsubscribe` * `subscribe` endpoint: When execution reaches your API's action in the workflow,
-the Logic Apps engine calls the `subscribe` endpoint. This step causes the
-logic app to create a callback URL that your API stores and then wait for the
+the Azure Logic Apps engine calls the `subscribe` endpoint. This step causes the
+workflow to create a callback URL that your API stores and then wait for the
callback from your API when work is complete. Your API then calls back with an HTTP POST to the URL and passes any returned content and headers as input to the logic app.
-* `unsubscribe` endpoint: If the logic app run is canceled, the Logic Apps engine calls the `unsubscribe` endpoint. Your API can then unregister the callback URL and stop any processes as necessary.
+* `unsubscribe` endpoint: If the workflow run is canceled, the Azure Logic Apps engine calls the `unsubscribe` endpoint. Your API can then unregister the callback URL and stop any processes as necessary.
![Webhook action pattern](./media/logic-apps-create-api-app/custom-api-webhook-action-pattern.png)
-Currently, the Logic App Designer doesn't support discovering webhook endpoints through Swagger. So for this pattern, you have to add a [**Webhook** action](../connectors/connectors-native-webhook.md) and specify the URL, headers, and body for your request. See also [Workflow actions and triggers](logic-apps-workflow-actions-triggers.md#apiconnection-webhook-action). For an example webhook pattern, review this [webhook trigger sample in GitHub](https://github.com/logicappsio/LogicAppTriggersExample/blob/master/LogicAppTriggers/Controllers/WebhookTriggerController.cs).
+Currently, the workflow designer doesn't support discovering webhook endpoints through Swagger. So for this pattern, you have to add a [**Webhook** action](../connectors/connectors-native-webhook.md) and specify the URL, headers, and body for your request. See also [Workflow actions and triggers](logic-apps-workflow-actions-triggers.md#apiconnection-webhook-action). For an example webhook pattern, review this [webhook trigger sample in GitHub](https://github.com/logicappsio/LogicAppTriggersExample/blob/master/LogicAppTriggers/Controllers/WebhookTriggerController.cs).
Here are some other tips and notes: * To pass in the callback URL, you can use the `@listCallbackUrl()` workflow function in any of the previous fields as necessary.
-* If you own both the logic app and the subscribed service, you don't have to call the `unsubscribe` endpoint after the callback URL is called. Otherwise, the Logic Apps runtime needs to call the `unsubscribe` endpoint to signal that no more calls are expected and to allow for resource clean up on the server side.
+* If you own both the logic app resource and the subscribed service, you don't have to call the `unsubscribe` endpoint after the callback URL is called. Otherwise, the Azure Logic Apps runtime needs to call the `unsubscribe` endpoint to signal that no more calls are expected and to allow resource cleanup on the server side.
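The `subscribe`/`unsubscribe` contract described above can be sketched as follows. This is a hedged illustration (the in-memory store, names, and return shapes are hypothetical; a real API would persist the callback URL and POST to it with an HTTP client):

```python
# Sketch of the webhook action's subscribe/unsubscribe contract.
_callbacks: dict = {}

def subscribe(subscription_id, callback_url):
    """The engine calls this when the webhook action runs; store the URL."""
    _callbacks[subscription_id] = callback_url
    return 201

def unsubscribe(subscription_id):
    """The engine calls this if the run is canceled; drop the URL, stop work."""
    _callbacks.pop(subscription_id, None)
    return 200

def complete(subscription_id, payload):
    """When work finishes, send the result to the stored callback URL."""
    url = _callbacks.pop(subscription_id, None)
    if url is None:
        return None  # run was canceled before the job finished
    # Real implementation: an HTTP POST whose body and headers become the
    # action's output in the workflow.
    return ("POST", url, payload)
```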
<a name="triggers"></a> ## Trigger patterns Your custom API can act as a [*trigger*](./logic-apps-overview.md#logic-app-concepts)
-that starts a logic app when new data or an event meets a specified condition.
+that starts a workflow run when new data or an event meets a specified condition.
This trigger can either check regularly, or wait and listen, for new data or events at your service endpoint. If new data or an event meets the specified condition,
Also, learn more about [usage metering for triggers](logic-apps-pricing.md).
### Check for new data or events regularly with the polling trigger pattern A *polling trigger* acts much like the [polling action](#async-pattern)
-previously described in this topic. The Logic Apps engine periodically
+previously described in this topic. The Azure Logic Apps engine periodically
calls and checks the trigger endpoint for new data or events. If the engine finds new data or an event that meets your specified condition,
-the trigger fires. Then, the engine creates a logic app instance that processes the data as input.
+the trigger fires. Then, the engine creates a workflow instance that processes the data as input.
![Polling trigger pattern](./media/logic-apps-create-api-app/custom-api-polling-trigger-pattern.png) > [!NOTE]
-> Each polling request counts as an action execution, even when no logic app instance is created.
+> Each polling request counts as an action execution, even when no workflow instance is created.
> To prevent processing the same data multiple times, > your trigger should clean up data that was already read and passed to the logic app.
Here are specific steps for a polling trigger, described from the API's perspect
| Found new data or event? | API response | | - | |
-| Found | Return an HTTP `200 OK` status with the response payload (input for next step). <br/>This response creates a logic app instance and starts the workflow. |
-| Not found | Return an HTTP `202 ACCEPTED` status with a `location` header and a `retry-after` header. <br/>For triggers, the `location` header should also contain a `triggerState` query parameter, which is usually a "timestamp." Your API can use this identifier to track the last time that the logic app was triggered. |
+| Found | Return an HTTP `200 OK` status with the response payload (input for next step). <br/>This response creates a workflow instance and starts the workflow. |
+| Not found | Return an HTTP `202 ACCEPTED` status with a `location` header and a `retry-after` header. <br/>For triggers, the `location` header should also contain a `triggerState` query parameter, which is usually a "timestamp." Your API can use this identifier to track the last time that the workflow was triggered. |
||| For example, to periodically check your service for new files,
you might build a polling trigger that has these behaviors:
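The Found/Not-found responses in the table above can be sketched like this. A hedged Python sketch only (the URL and the timestamp-based `triggerState` format are illustrative):

```python
from datetime import datetime, timezone

def poll_trigger(new_items, trigger_state=None):
    """Return the (status, headers, body) the trigger endpoint should give the engine."""
    if new_items:
        # Found: 200 with the payload; the engine starts a workflow instance.
        return 200, {}, {"items": new_items}
    # Not found: 202 with a location URL carrying triggerState, plus retry-after.
    state = trigger_state or datetime.now(timezone.utc).isoformat()
    headers = {
        "location": f"https://contoso.example/trigger?triggerState={state}",
        "retry-after": "15",
    }
    return 202, headers, None
```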
A webhook trigger is a *push trigger* that waits and listens for new data or events at your service endpoint. If new data or an event meets the specified condition,
-the trigger fires and creates a logic app instance, which then processes the data as input.
+the trigger fires and creates a workflow instance, which then processes the data as input.
Webhook triggers act much like the [webhook actions](#webhook-actions) previously described in this topic, and are set up with `subscribe` and `unsubscribe` endpoints. * `subscribe` endpoint: When you add and save a webhook trigger in your logic app,
-the Logic Apps engine calls the `subscribe` endpoint. This step causes
-the logic app to create a callback URL that your API stores.
+the Azure Logic Apps engine calls the `subscribe` endpoint. This step causes
+the workflow to create a callback URL that your API stores.
When there's new data or an event that meets the specified condition, your API calls back with an HTTP POST to the URL. The content payload and headers pass as input to the logic app.
-* `unsubscribe` endpoint: If the webhook trigger or entire logic app is deleted, the Logic Apps engine calls the `unsubscribe` endpoint.
+* `unsubscribe` endpoint: If the webhook trigger or entire logic app resource is deleted, the Azure Logic Apps engine calls the `unsubscribe` endpoint.
Your API can then unregister the callback URL and stop any processes as necessary. ![Webhook trigger pattern](./media/logic-apps-create-api-app/custom-api-webhook-trigger-pattern.png)
-Currently, the Logic App Designer doesn't support discovering webhook endpoints through Swagger. So for this pattern, you have to add a [**Webhook** trigger](../connectors/connectors-native-webhook.md) and specify the URL, headers, and body for your request. See also [HTTPWebhook trigger](logic-apps-workflow-actions-triggers.md#httpwebhook-trigger). For an example webhook pattern, review this [webhook trigger controller sample in GitHub](https://github.com/logicappsio/LogicAppTriggersExample/blob/master/LogicAppTriggers/Controllers/WebhookTriggerController.cs).
+Currently, the workflow designer doesn't support discovering webhook endpoints through Swagger. So for this pattern, you have to add a [**Webhook** trigger](../connectors/connectors-native-webhook.md) and specify the URL, headers, and body for your request. See also [HTTPWebhook trigger](logic-apps-workflow-actions-triggers.md#httpwebhook-trigger). For an example webhook pattern, review this [webhook trigger controller sample in GitHub](https://github.com/logicappsio/LogicAppTriggersExample/blob/master/LogicAppTriggers/Controllers/WebhookTriggerController.cs).
Here are some other tips and notes:
Here are some other tips and notes:
* To prevent processing the same data multiple times, your trigger should clean up data that was already read and passed to the logic app.
-* If you own both the logic app and the subscribed service, you don't have to call the `unsubscribe` endpoint after the callback URL is called. Otherwise, the Logic Apps runtime needs to call the `unsubscribe` endpoint to signal that no more calls are expected and to allow for resource clean up on the server side.
+* If you own both the logic app resource and the subscribed service, you don't have to call the `unsubscribe` endpoint after the callback URL is called. Otherwise, the Azure Logic Apps runtime needs to call the `unsubscribe` endpoint to signal that no more calls are expected and to allow resource cleanup on the server side.
## Improve security for calls to your APIs from logic apps
Learn [how to deploy and call custom APIs from logic apps](../logic-apps/logic-a
## Publish custom APIs to Azure
-To make your custom APIs available for other Logic Apps users in Azure,
-you must add security and register them as Logic App connectors.
+To make your custom APIs available for other Azure Logic Apps users,
+you must add security and register them as Azure Logic Apps connectors.
For more information, see [Custom connectors overview](../logic-apps/custom-connector-overview.md). To make your custom APIs available to all users in Azure Logic Apps, Power Automate, and Microsoft Power Apps, you must add security,
-register your APIs as Logic App connectors, and nominate your connectors for the
+register your APIs as Azure Logic Apps connectors, and nominate your connectors for the
[Microsoft Azure Certified program](https://azure.microsoft.com/marketplace/programs/certified/logic-apps/). ## Get support
register your APIs as Logic App connectors, and nominate your connectors for the
* For questions, visit the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
-* To help improve Logic Apps, vote on or submit ideas at the
- [Logic Apps user feedback site](https://aka.ms/logicapps-wish).
- ## Next steps * [Handle errors and exceptions](../logic-apps/logic-apps-exception-handling.md)
logic-apps Logic Apps Diagnosing Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-diagnosing-failures.md
ms.suite: integration Previously updated : 05/24/2022 Last updated : 08/20/2022 # Troubleshoot and diagnose workflow failures in Azure Logic Apps
logic-apps Logic Apps Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-gateway-install.md
ms.suite: integration Previously updated : 03/16/2021 Last updated : 08/20/2022 #Customer intent: As a software developer, I want to install and set up the on-premises data gateway so that I can create logic app workflows that can access data in on-premises systems.
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
Previously updated : 08/16/2022 Last updated : 08/22/2022 tags: connectors
If you're receiving this error message and experience intermittent failures call
1. Check SAP settings in your on-premises data gateway service configuration file, `Microsoft.PowerBI.EnterpriseGateway.exe.config`.
- The retry count setting looks like `WebhookRetryMaximumCount="2"`. The retry interval setting looks like `WebhookRetryDefaultDelay="00:00:00.10"` and the timespan format is `HH:mm:ss.ff`.
+ 1. Under the `configuration` root node, add a `configSections` element, if none exists.
+ 1. Under the `configSections` node, add a `section` element with the following attributes, if none exist: `name="SapAdapterSection" type="Microsoft.Adapters.SAP.Common.SapAdapterSection, Microsoft.Adapters.SAP.Common"`
+
+ > [!IMPORTANT]
+ > Don't change the attributes in existing `section` elements, if such elements already exist.
+
+ Your `configSections` element looks like the following version, if no other section or section group is declared in the gateway service configuration:
+
+ ```xml
+ <configSections>
+ <section name="SapAdapterSection" type="Microsoft.Adapters.SAP.Common.SapAdapterSection, Microsoft.Adapters.SAP.Common"/>
+ </configSections>
+ ```
+
+ 1. Under the `configuration` root node, add an `SapAdapterSection` element, if none exists.
+ 1. Under the `SapAdapterSection` node, add a `Broker` element with the following attributes, if none exist: `WebhookRetryDefaultDelay="00:00:00.10" WebhookRetryMaximumCount="2"`
+
+ > [!IMPORTANT]
+ > Change the attributes for the `Broker` element, even if the element already exists.
+
+ The `SapAdapterSection` element looks like the following version, if no other element or attribute is declared in the SAP adapter configuration:
+
+ ```xml
+ <SapAdapterSection>
+ <Broker WebhookRetryDefaultDelay="00:00:00.10" WebhookRetryMaximumCount="2" />
+ </SapAdapterSection>
+ ```
+
+ The retry count setting looks like `WebhookRetryMaximumCount="2"`. The retry interval setting looks like `WebhookRetryDefaultDelay="00:00:00.10"` where the timespan format is `HH:mm:ss.ff`.
+
+ > [!NOTE]
+ > For more information about the configuration file,
+ > review [Configuration file schema for .NET Framework](/dotnet/framework/configure-apps/file-schema/).
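If you prefer to script the configuration edits above, here's a hedged sketch using Python's standard `xml.etree.ElementTree` (the function name and in-memory usage are illustrative; always back up the real `Microsoft.PowerBI.EnterpriseGateway.exe.config` file first):

```python
import xml.etree.ElementTree as ET

def ensure_sap_retry_settings(xml_text):
    """Apply the configSections/SapAdapterSection/Broker edits to a config document."""
    root = ET.fromstring(xml_text)
    sections = root.find("configSections")
    if sections is None:
        sections = ET.Element("configSections")
        root.insert(0, sections)  # .NET requires configSections to be first
    if not any(s.get("name") == "SapAdapterSection"
               for s in sections.findall("section")):
        ET.SubElement(sections, "section", name="SapAdapterSection",
                      type="Microsoft.Adapters.SAP.Common.SapAdapterSection, "
                           "Microsoft.Adapters.SAP.Common")
    sap = root.find("SapAdapterSection")
    if sap is None:
        sap = ET.SubElement(root, "SapAdapterSection")
    broker = sap.find("Broker")
    if broker is None:
        broker = ET.SubElement(sap, "Broker")
    # Per the guidance above, set these attributes even if Broker already exists.
    broker.set("WebhookRetryDefaultDelay", "00:00:00.10")
    broker.set("WebhookRetryMaximumCount", "2")
    return ET.tostring(root, encoding="unicode")
```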
1. Save your changes. Restart your on-premises data gateway.
logic-apps Support Non Unicode Character Encoding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/support-non-unicode-character-encoding.md
Title: Convert non-Unicode encoded text for compatibility description: Handle non-Unicode characters in Azure Logic Apps by converting text payloads to UTF-8 with base64 encoding and Azure Functions. Previously updated : 10/05/2021+ - Last updated : 08/20/2022
-# Support non-Unicode character encoding in Logic Apps
+
+# Support non-Unicode character encoding in Azure Logic Apps
When you work with text payloads, Azure Logic Apps infers that the text is encoded in a Unicode format, such as UTF-8. You might have problems receiving, sending, or processing characters with different encodings in your workflow. For example, you might get corrupted characters in flat files when working with legacy systems that don't support Unicode.
If you set the `Content-Type` header to `application/octet-stream`, you also mig
## Base64 encode content
-Before you [base64 encode](workflow-definition-language-functions-reference.md#base64) content, make sure you've [converted the text to UTF-8](#convert-payload-encoding). If you base64 decode the content to a string before converting the text to UTF-8, characters might return corrupted.
+Before you [base64 encode](workflow-definition-language-functions-reference.md#base64) content to a string, make sure that you've [converted the text to UTF-8](#convert-payload-encoding). Otherwise, the characters might be corrupted.
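A hedged sketch of why the order matters, in Python rather than the .NET and Azure Functions samples this article uses: decode the payload with its source encoding, re-encode as UTF-8, and only then base64 encode.

```python
import base64

def to_utf8_base64(raw: bytes, source_encoding: str) -> str:
    text = raw.decode(source_encoding)   # interpret the legacy bytes
    utf8_bytes = text.encode("utf-8")    # normalize to UTF-8
    return base64.b64encode(utf8_bytes).decode("ascii")

# Simulate a non-Unicode payload using a Windows-1252-encoded name.
legacy = "Héloïse".encode("windows-1252")
encoded = to_utf8_base64(legacy, "windows-1252")
# Decoding the base64 now yields clean UTF-8 text again:
assert base64.b64decode(encoded).decode("utf-8") == "Héloïse"
```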
Next, convert any .NET-supported encoding to another .NET-supported encoding. Review the [Azure Functions code example](#azure-functions-version) or the [.NET code example](#net-version):
Using these same concepts, you can also [send a non-Unicode payload from your wo
## Sample payload conversions
-In this example, the base64-encoded sample input string is a personal name, *H&eacute;lo&iuml;se*, that contains accented characters.
+In this example, the base64-encoded sample input string is a personal name that contains accented characters: *H&eacute;lo&iuml;se*
Example input:
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
To use the key when deploying a model to Azure Container Instance, create a new
For more information on creating and using a deployment configuration, see the following articles: * [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-) reference
-* [Where and how to deploy](/azure/machine-learning/how-to-deploy-managed-online-endpoints)
+* [Where and how to deploy](how-to-deploy-managed-online-endpoints.md)
For more information on using a customer-managed key with ACI, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#encrypt-data-with-a-customer-managed-key).
machine-learning Concept Sourcing Human Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-sourcing-human-data.md
# What is "human data" and why is it important to source responsibly? + Human data is data collected directly from, or about, people. Human data may include personal data such as names, age, images, or voice clips and sensitive data such as genetic data, biometric data, gender identity, religious beliefs, or political affiliations. Collecting this data can be important to building AI systems that work for all users. But certain practices should be avoided, especially ones that can cause physical and psychological harm to data contributors.
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
For no-code-deployment, Azure Machine Learning
There are three workflows for deploying MLflow models to Azure Machine Learning: - [Deploy using the MLflow plugin](#deploy-using-the-mlflow-plugin)-- [Deploy using Azure ML CLI (v2)](#deploy-using-azure-ml-cli-v2)
+- [Deploy using Azure ML CLI/SDK (v2)](#deploy-using-azure-ml-clisdk-v2)
- [Deploy using Azure Machine Learning studio](#deploy-using-azure-machine-learning-studio) Each workflow has different capabilities, particularly around which type of compute they can target. The following table shows them:
The MLflow plugin [azureml-mlflow](https://pypi.org/project/azureml-mlflow/) can
) ```
-## Deploy using Azure ML CLI (v2)
+## Deploy using Azure ML CLI/SDK (v2)
-You can use Azure ML CLI v2 to deploy models trained and logged with MLflow to [managed endpoints (Online/batch)](concept-endpoints.md). When you deploy your MLflow model using the Azure ML CLI v2, it's a no-code-deployment so you don't have to provide a scoring script or an environment, but you can if needed.
+You can use Azure ML CLI/SDK v2 to deploy models trained and logged with MLflow to [managed endpoints (online/batch)](concept-endpoints.md). Deployment of MLflow models supports no-code-deployment, so you don't have to provide a scoring script or an environment, but you can if needed.
### Prerequisites
You can use Azure ML CLI v2 to deploy models trained and logged with MLflow to [
* You must have an MLflow model. If your model is not in MLflow format and you want to use this feature, you can [convert your custom ML model to MLflow format](how-to-convert-custom-model-to-mlflow.md). -
-In this code snippet used in this article, the `ENDPOINT_NAME` environment variable contains the name of the endpoint to create and use. To set this, use the following command from the CLI. Replace `<YOUR_ENDPOINT_NAME>` with the name of your endpoint:
-- ### Steps - This example shows how you can deploy an MLflow model to an online endpoint using CLI (v2). > [!IMPORTANT] > For MLflow no-code-deployment, **[testing via local endpoints](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints)** is currently not supported.
-1. Create a YAML configuration file for your endpoint. The following example configures the name and authentication mode of the endpoint:
+1. Connect to your Azure Machine Learning workspace
- __create-endpoint.yaml__
+ # [Azure ML CLI (v2)](#tab/cli)
+
+ ```bash
+ az account set --subscription <subscription>
+ az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+ ```
+
+ # [Azure ML SDK for Python (v2)](#tab/sdk)
+
+ The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+
+ 1. Import the required libraries:
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment, Model
+ from azure.ai.ml.constants import AssetTypes
+ from azure.identity import DefaultAzureCredential
+ ```
+
+ 2. Configure workspace details and get a handle to the workspace:
+
+ ```python
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+ ```
+
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml":::
+1. The following example configures the name and authentication mode of the endpoint:
+
+ # [Azure ML CLI (v2)](#tab/cli)
+
+ Create a YAML configuration file for your endpoint:
+
+ __create-endpoint.yaml__
-1. To create a new endpoint using the YAML configuration, use the following command:
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml":::
+
+ # [Azure ML SDK for Python (v2)](#tab/sdk)
+
+ Create an endpoint using the SDK:
+
+ ```python
+ endpoint = ManagedOnlineEndpoint(
+ name="my-endpoint",
+ description="this is a sample local endpoint",
+ auth_mode="key"
+ )
+ ```
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
+1. Execute the endpoint creation. This operation will create the endpoint in the Azure Machine Learning workspace:
+
+ # [Azure ML CLI (v2)](#tab/cli)
+
+ To create a new endpoint using the YAML configuration, use the following command:
-1. Create a YAML configuration file for the deployment.
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
- # [From a training job](#tab/fromjob)
+ # [Azure ML SDK for Python (v2)](#tab/sdk)
+
+ To create a new endpoint using the endpoint configuration just created, use the following command:
+
+ ```python
+ ml_client.online_endpoints.begin_create_or_update(endpoint)
+ ```
+
+1. Before going further, we need to register the model we want to deploy. Deployment of unregistered models is not supported in Azure Machine Learning.
- The following example configures a deployment `sklearn-diabetes` to the endpoint created in the previous step. The model is registered from a job previously run:
+ # [Azure ML CLI (v2)](#tab/cli)
- a. Get the job name of the training job. In this example we are assuming the job you want is the last one submitted to the platform.
+ We first need to register the model we want to deploy. Deployment of unregistered models is not supported in Azure Machine Learning.
+ #### From a training job
+
+ In this example, the model is registered from a job previously run. Assuming that your model was registered with an instruction similar to this:
+
+ ```python
+ mlflow.sklearn.log_model(scikit_model, "model")
+ ```
+
+ To register the model from a previous run we would need the job name/run ID in question. For simplicity, let's assume that we are looking to register the model trained in the last run submitted to the workspace:
+ ```bash
+ JOB_NAME=$(az ml job list --query "[0].name" | tr -d '"')
+ ```
+
+ Then, let's register the model in the registry.
+
+ ```bash
+ az ml model create --name "mir-sample-sklearn-mlflow-model" \
+ --type "mlflow_model" \
+ --path "azureml://jobs/$JOB_NAME/outputs/artifacts/model"
+ ```
+
+ #### From a local model
- b. Register the model in the registry.
+ If your model is located in the local file system or compute, then you can register it as follows:
```bash az ml model create --name "mir-sample-sklearn-mlflow-model" \ --type "mlflow_model" \
- --path "azureml://jobs/$JOB_NAME/outputs/artifacts/model"
+ --path "sklearn-diabetes/model"
```
+
+ # [Azure ML SDK for Python (v2)](#tab/sdk)
- c. Create the deployment `YAML` file:
+ We first need to register the model we want to deploy. Deployment of unregistered models is not supported in Azure Machine Learning.
- __sklearn-deployment.yaml__
+ #### From a training job
+
+ In this example, the model is registered from a job previously run. Assuming that your model was registered with an instruction similar to this:
+
+ ```python
+ mlflow.sklearn.log_model(scikit_model, "model")
+ ```
+
+ To register the model from a previous run we would need the job name/run ID in question. For simplicity, let's assume that we are looking to register the model trained in the last run submitted to the workspace:
+
+ ```python
+ job_name = next(iter(ml_client.jobs.list())).name
+ ```
+
+ Then, let's register the model in the registry.
+
+ ```python
+ model = Model(name="mir-sample-sklearn-mlflow-model",
+ path=f"azureml://jobs/{job_name}/outputs/artifacts/model",
+ type=AssetTypes.MLFLOW_MODEL)
+ ml_client.models.create_or_update(model)
+ ```
+
+ #### From a local model
+ If your model is located in the local file system or compute, then you can register it as follows:
+
+ ```python
+ model = Model(name="mir-sample-sklearn-mlflow-model",
+ path="sklearn-diabetes/model",
+ type=AssetTypes.MLFLOW_MODEL)
+ ml_client.models.create_or_update(model)
+ ```
+
+1. Once the endpoint is created, we need to create a deployment on it. Remember that endpoints can contain one or multiple deployments and traffic can be configured for each of them. In this example, we are going to create only one deployment to serve all the traffic, named `sklearn-deployment`.
+
+ # [Azure ML CLI (v2)](#tab/cli)
+
+ Create the deployment `YAML` file:
+
+ __sklearn-deployment.yaml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
+ name: sklearn-deployment
This example shows how you can deploy an MLflow model to an online endpoint usin
instance_type: Standard_DS2_v2
instance_count: 1
```
+
+ # [Azure ML SDK for Python (v2)](#tab/sdk)
+
+ ```python
+ blue_deployment = ManagedOnlineDeployment(
+ name="sklearn-deployment",
+ endpoint_name="my-endpoint",
+ model=model,
+ instance_type="Standard_F2s_v2",
+ instance_count=1,
+ )
+ ```
+
+
+1. Create the deployment and assign all the traffic to it.
- > [!IMPORTANT]
- > For MLflow no-code-deployment (NCD) to work, setting **`type`** to **`mlflow_model`** is required, `type: mlflow_modelΓÇï`. For more information, see [CLI (v2) model YAML schema](reference-yaml-model.md).
+ # [Azure ML CLI (v2)](#tab/cli)
- # [From a local model](#tab/fromlocal)
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
- The following example configures a deployment `sklearn-diabetes` to the endpoint created in the previous step using the local MLflow model:
-
- __sklearn-deployment.yaml__
-
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sklearn-deployment.yaml":::
-
- > [!IMPORTANT]
- > For MLflow no-code-deployment (NCD) to work, setting **`type`** to **`mlflow_model`** is required, `type: mlflow_modelΓÇï`. For more information, see [CLI (v2) model YAML schema](reference-yaml-model.md).
-
-1. To create the deployment using the YAML configuration, use the following command:
-
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
+ # [Azure ML SDK for Python (v2)](#tab/sdk)
+
+ ```python
+ ml_client.begin_create_or_update(blue_deployment)
+ endpoint.traffic = {"sklearn-deployment": 100}
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+1. Once the deployment is completed, the service is ready to receive requests. If you are not sure about how to submit requests to the service, see [Creating requests](#creating-requests).
## Deploy using Azure Machine Learning studio
You can use [Azure Machine Learning studio](https://ml.azure.com) to deploy models.
:::image type="content" source="./media/how-to-manage-models/register-model-as-asset.png" alt-text="Screenshot of the UI to register a model." lightbox="./media/how-to-manage-models/register-model-as-asset.png":::
-2. From [studio](https://ml.azure.com), select your workspace and then use either the __endpoints__ or __models__ page to create the endpoint deployment:
-
- # [Endpoints page](#tab/endpoint)
-
- 1. From the __Endpoints__ page, Select **+Create**.
-
- :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" alt-text="Screenshot showing create option on the Endpoints UI page.":::
-
- 1. Provide a name and authentication type for the endpoint, and then select __Next__.
- 1. When selecting a model, select the MLflow model registered previously. Select __Next__ to continue.
+2. From [studio](https://ml.azure.com), select your workspace and then use the __Endpoints__ page to create the endpoint deployment:
- 1. When you select a model registered in MLflow format, in the Environment step of the wizard, you don't need a scoring script or an environment.
+    a. From the __Endpoints__ page, select **+Create**.
- :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" alt-text="Screenshot showing no code and environment needed for MLflow models.":::
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" alt-text="Screenshot showing create option on the Endpoints UI page.":::
- 1. Complete the wizard to deploy the model to the endpoint.
+ b. Provide a name and authentication type for the endpoint, and then select __Next__.
- :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" alt-text="Screenshot showing NCD review screen.":::
+    c. In the model step, select the MLflow model you registered previously, and then select __Next__ to continue.
- # [Models page](#tab/models)
+    d. Because the model is registered in MLflow format, the Environment step of the wizard doesn't require a scoring script or an environment.
- 1. Select the MLflow model, and then select __Deploy__. When prompted, select __Deploy to real-time endpoint__.
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" alt-text="Screenshot showing no code and environment needed for MLflow models.":::
- :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png" alt-text="Screenshot showing how to deploy model from Models UI.":::
+ e. Complete the wizard to deploy the model to the endpoint.
- 1. Complete the wizard to deploy the model to the endpoint.
+       :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" alt-text="Screenshot showing NCD review screen.":::