Updates from: 06/07/2022 01:18:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
This article provides you with the information you need to configure wildcard ap
- Note - A regular application will always take precedence over a complex app (wildcard application).

## Pre-requisites
-Before you get started with single sign-on for header-based authentication apps, make sure your environment is ready with the following settings and configurations:
+Before you get started with the Application Proxy complex application scenario, make sure your environment is ready with the following settings and configurations:
- You need to enable Application Proxy and install a connector that has line of sight to your applications. See the tutorial [Add an on-premises application for remote access through Application Proxy](application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad) to learn how to prepare your on-premises environment, install and register a connector, and test the connector.
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/nymi-band |
| Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ |
| OneSpan Inc. | ![n] | ![y]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |
+| Swissbit | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.swissbit.com/en/products/ishield-fido2/ |
| Thales Group | ![n] | ![y]| ![y]| ![n]| ![n] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |
The following providers offer FIDO2 security keys of different form factors that
| Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ |
+
<!--Image references-->
[y]: ./media/fido2-compatibility/yes.png
[n]: ./media/fido2-compatibility/no.png
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Title: Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods
-description: Learn how to configure and enable users to to register Passwordless authentication methods by using a Temporary Access Pass
+description: Learn how to configure and enable users to register Passwordless authentication methods by using a Temporary Access Pass
Previously updated : 10/22/2021
Last updated : 05/24/2022
-# Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods (Preview)
+# Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods
Passwordless authentication methods, such as FIDO2 and Passwordless Phone Sign-in through the Microsoft Authenticator app, enable users to sign in securely without a password. Users can bootstrap Passwordless methods in one of two ways:
Users can bootstrap Passwordless methods in one of two ways:
- Using existing Azure AD Multi-Factor Authentication methods
- Using a Temporary Access Pass (TAP)
-A Temporary Access Pass is a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including Passwordless ones.
+A Temporary Access Pass is a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including Passwordless ones such as Microsoft Authenticator or even Windows Hello.
A Temporary Access Pass also makes recovery easier when a user has lost or forgotten their strong authentication factor like a FIDO2 security key or Microsoft Authenticator app, but needs to sign in to register new strong authentication methods. This article shows you how to enable and use a Temporary Access Pass in Azure AD using the Azure portal. You can also perform these actions using the REST APIs.
->[!NOTE]
->Temporary Access Pass is currently in public preview. Some features might not be supported or have limited capabilities.
-
## Enable the Temporary Access Pass policy

A Temporary Access Pass policy defines settings, such as the lifetime of passes created in the tenant, or the users and groups who can use a Temporary Access Pass to sign in.
-Before anyone can sign in with a Temporary Access Pass, you need to enable the authentication method policy and choose which users and groups can sign in by using a Temporary Access Pass.
+Before anyone can sign in with a Temporary Access Pass, you need to enable Temporary Access Pass in the authentication method policy and choose which users and groups can sign in by using a Temporary Access Pass.
Although you can create a Temporary Access Pass for any user, only those included in the policy can sign in with it. Global administrator and Authentication Method Policy administrator role holders can update the Temporary Access Pass authentication method policy.

To configure the Temporary Access Pass authentication method policy:
-1. Sign in to the Azure portal as a Global admin and click **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**.
-1. Click **Yes** to enable the policy, select which users have the policy applied, and any **General** settings.
+1. Sign in to the Azure portal as a Global admin or Authentication Policy admin and click **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**.
+![Screenshot of how to manage Temporary Access Pass within the authentication method policy experience.](./media/how-to-authentication-temporary-access-pass/policy.png)
+1. Set Enable to **Yes** to enable the policy, and select which users have the policy applied.
+![Screenshot of how to enable the Temporary Access Pass authentication method policy.](./media/how-to-authentication-temporary-access-pass/policy-scope.png)
+1. (Optional) Click **Configure** and modify the default Temporary Access Pass settings, such as the maximum lifetime or the passcode length.
+![Screenshot of how to customize the settings for Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/policy-settings.png)
+1. Click **Save** to apply the policy.
+
- ![Screenshot of how to enable the Temporary Access Pass authentication method policy](./media/how-to-authentication-temporary-access-pass/policy.png)
The default value and the range of allowed values are described in the following table.
To configure the Temporary Access Pass authentication method policy:
| Setting | Default values | Allowed values | Comments |
|---|---|---|---|
| Minimum lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Minimum number of minutes that the Temporary Access Pass is valid. |
- | Maximum lifetime | 24 hours | 10 – 43200 Minutes (30 days) | Maximum number of minutes that the Temporary Access Pass is valid. |
+ | Maximum lifetime | 8 hours | 10 – 43200 Minutes (30 days) | Maximum number of minutes that the Temporary Access Pass is valid. |
| Default lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Default values can be overridden by the individual passes, within the minimum and maximum lifetime configured by the policy. |
| One-time use | False | True / False | When the policy is set to false, passes in the tenant can be used either once or more than once during their validity (maximum lifetime). By enforcing one-time use in the Temporary Access Pass policy, all passes created in the tenant will be created as one-time use. |
| Length | 8 | 8-48 characters | Defines the length of the passcode. |
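The lifetime bounds in this table can be illustrated with a small validation sketch (the function and constant names are hypothetical, not part of any Microsoft SDK; the values come from the policy defaults above):

```python
# Illustrative only: validate a requested Temporary Access Pass lifetime
# against the policy bounds from the table above (10 minutes to 30 days).
POLICY_MIN_MINUTES = 10
POLICY_MAX_MINUTES = 43200  # 30 days
DEFAULT_LIFETIME_MINUTES = 60  # policy default of 1 hour

def resolve_lifetime(requested_minutes=None):
    """Return the lifetime a pass would get, or raise if out of bounds."""
    if requested_minutes is None:
        # Individual passes can override the default within policy bounds.
        return DEFAULT_LIFETIME_MINUTES
    if not (POLICY_MIN_MINUTES <= requested_minutes <= POLICY_MAX_MINUTES):
        raise ValueError(
            f"Lifetime must be between {POLICY_MIN_MINUTES} and "
            f"{POLICY_MAX_MINUTES} minutes"
        )
    return requested_minutes
```

For example, `resolve_lifetime()` returns the 60-minute default, while `resolve_lifetime(5)` is rejected because it falls below the 10-minute minimum.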
These roles can perform the following actions related to a Temporary Access Pass
1. Click **Azure Active Directory**, browse to Users, select a user, such as *Chris Green*, then choose **Authentication methods**.
1. If needed, select the option to **Try the new user authentication methods experience**.
1. Select the option to **Add authentication methods**.
-1. Below **Choose method**, click **Temporary Access Pass (Preview)**.
+1. Below **Choose method**, click **Temporary Access Pass**.
1. Define a custom activation time or duration and click **Add**.
- ![Screenshot of how to create a Temporary Access Pass](./media/how-to-authentication-temporary-access-pass/create.png)
+ ![Screenshot of how to create a Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/create.png)
1. Once added, the details of the Temporary Access Pass are shown. Make a note of the actual Temporary Access Pass value. You provide this value to the user. You can't view this value after you click **Ok**.
- ![Screenshot of Temporary Access Pass details](./media/how-to-authentication-temporary-access-pass/details.png)
+ ![Screenshot of Temporary Access Pass details.](./media/how-to-authentication-temporary-access-pass/details.png)
The following commands show how to create and get a Temporary Access Pass by using PowerShell:
The following commands show how to create and get a Temporary Access Pass by usi
```powershell
# Create a Temporary Access Pass for a user
$properties = @{}
$properties.isUsableOnce = $True
-$properties.startDateTime = '2021-03-11 06:00:00'
+$properties.startDateTime = '2022-05-23 06:00:00'
$propertiesJSON = $properties | ConvertTo-Json

New-MgUserAuthenticationTemporaryAccessPassMethod -UserId user2@contoso.com -BodyParameter $propertiesJSON

Id CreatedDateTime IsUsable IsUsableOnce LifetimeInMinutes MethodUsabilityReason StartDateTime TemporaryAccessPass
-- --------------- -------- ------------ ----------------- --------------------- ------------- -------------------
-c5dbd20a-8b8f-4791-a23f-488fcbde3b38 9/03/2021 11:19:17 PM False True 60 NotYetValid 11/03/2021 6:00:00 AM TAPRocks!
+c5dbd20a-8b8f-4791-a23f-488fcbde3b38 5/22/2022 11:19:17 PM False True 60 NotYetValid 5/23/2022 6:00:00 AM TAPRocks!
# Get a user's Temporary Access Pass
Get-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com

Id CreatedDateTime IsUsable IsUsableOnce LifetimeInMinutes MethodUsabilityReason StartDateTime TemporaryAccessPass
-- --------------- -------- ------------ ----------------- --------------------- ------------- -------------------
-c5dbd20a-8b8f-4791-a23f-488fcbde3b38 9/03/2021 11:19:17 PM False True 60 NotYetValid 11/03/2021 6:00:00 AM
+c5dbd20a-8b8f-4791-a23f-488fcbde3b38 5/22/2022 11:19:17 PM False True 60 NotYetValid 5/23/2022 6:00:00 AM
```

## Use a Temporary Access Pass
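The PowerShell cmdlets in the previous section wrap the Microsoft Graph REST API (`POST /users/{id | userPrincipalName}/authentication/temporaryAccessPassMethods`). A minimal sketch of assembling the equivalent request body in Python — token acquisition and the actual HTTP call are assumed and shown only as a comment:

```python
import json

# Sketch: request body for
#   POST https://graph.microsoft.com/v1.0/users/{user-id}/authentication/temporaryAccessPassMethods
# (requires a bearer token with UserAuthenticationMethod.ReadWrite.All)
body = {
    "startDateTime": "2022-05-23T06:00:00.000Z",  # optional; omit to start now
    "lifetimeInMinutes": 60,
    "isUsableOnce": True,
}
payload = json.dumps(body)
# requests.post(url, headers={"Authorization": f"Bearer {token}"}, data=payload)
print(payload)
```

The response contains the generated `temporaryAccessPass` value, which, as with the portal flow, cannot be retrieved again later.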
-The most common use for a Temporary Access Pass is for a user to register authentication details during the first sign-in, without the need to complete additional security prompts. Authentication methods are registered at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). Users can also update existing authentication methods here.
+The most common use for a Temporary Access Pass is for a user to register authentication details during the first sign-in or device setup, without the need to complete additional security prompts. Authentication methods are registered at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). Users can also update existing authentication methods here.
1. Open a web browser to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo).
1. Enter the UPN of the account you created the Temporary Access Pass for, such as *tapuser@contoso.com*.
1. If the user is included in the Temporary Access Pass policy, they will see a screen to enter their Temporary Access Pass.
1. Enter the Temporary Access Pass that was displayed in the Azure portal.
- ![Screenshot of how to enter a Temporary Access Pass](./media/how-to-authentication-temporary-access-pass/enter.png)
+ ![Screenshot of how to enter a Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/enter.png)
>[!NOTE] >For federated domains, a Temporary Access Pass is preferred over federation. A user with a Temporary Access Pass will complete the authentication in Azure AD and will not get redirected to the federated Identity Provider (IdP).
The user is now signed in and can update or register a method such as FIDO2 secu
Users who update their authentication methods due to losing their credentials or device should make sure they remove the old authentication methods. Users can also continue to sign in by using their password; a TAP doesn't replace a user's password.
+
+### User management of Temporary Access Pass
+
+Users managing their security information at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo) will see an entry for the Temporary Access Pass. If a user does not have any other registered methods, they will see a banner at the top of the screen prompting them to add a new sign-in method. Users can also view the TAP expiration time and delete the TAP if it's no longer needed.
+
+![Screenshot of how users can manage a Temporary Access Pass in My Security Info.](./media/how-to-authentication-temporary-access-pass/tap-my-security-info.png)
+
+### Windows device setup
+Users with a Temporary Access Pass can navigate the setup process on Windows 10 and 11 to perform device join operations and configure Windows Hello for Business. Temporary Access Pass usage for setting up Windows Hello for Business varies based on the device's join state:
+- During Azure AD Join setup, users can authenticate with a TAP (no password required) and set up Windows Hello for Business.
+- On already Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard, or FIDO2 key before using a TAP to set up Windows Hello for Business.
+- On Hybrid Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard, or FIDO2 key before using a TAP to set up Windows Hello for Business.
+
+![Screenshot of how to enter Temporary Access Pass when setting up Windows 10.](./media/how-to-authentication-temporary-access-pass/windows-10-tap.png)
+
### Passwordless phone sign-in

Users can also use their Temporary Access Pass to register for Passwordless phone sign-in directly from the Authenticator app. For more information, see [Add your work or school account to the Microsoft Authenticator app](https://support.microsoft.com/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c).
-![Screenshot of how to enter a Temporary Access Pass using work or school account](./media/how-to-authentication-temporary-access-pass/enter-work-school.png)
+![Screenshot of how to enter a Temporary Access Pass using work or school account.](./media/how-to-authentication-temporary-access-pass/enter-work-school.png)
### Guest access
Users need to reauthenticate with different authentication methods after the Tem
Under the **Authentication methods** for a user, the **Detail** column shows when the Temporary Access Pass expired. You can delete an expired Temporary Access Pass using the following steps:

1. In the Azure AD portal, browse to **Users**, select a user, such as *Tap User*, then choose **Authentication methods**.
-1. On the right-hand side of the **Temporary Access Pass (Preview)** authentication method shown in the list, select **Delete**.
+1. On the right-hand side of the **Temporary Access Pass** authentication method shown in the list, select **Delete**.
You can also use PowerShell:
Remove-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com -
- A user can only have one Temporary Access Pass. The passcode can be used during the start and end time of the Temporary Access Pass.
- If the user requires a new Temporary Access Pass:
- - If the existing Temporary Access Pass is valid, the admin needs to delete the existing Temporary Access Pass and create a new pass for the user.
+ - If the existing Temporary Access Pass is valid, the admin can create a new Temporary Access Pass which will override the existing valid Temporary Access Pass.
  - If the existing Temporary Access Pass has expired, a new Temporary Access Pass will override the existing Temporary Access Pass.

For more information about NIST standards for onboarding and recovery, see [NIST Special Publication 800-63A](https://pages.nist.gov/800-63-3/sp800-63a.html#sec4).
For more information about NIST standards for onboarding and recovery, see [NIST
Keep these limitations in mind:

- When using a one-time Temporary Access Pass to register a Passwordless method such as FIDO2 or Phone sign-in, the user must complete the registration within 10 minutes of sign-in with the one-time Temporary Access Pass. This limitation does not apply to a Temporary Access Pass that can be used more than once.
-- Temporary Access Pass is in public preview and currently not available in Azure for US Government.
- Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they have signed in with a Temporary Access Pass. Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience does not currently support FIDO2 and Phone Sign-in registration.
-- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter, or during Windows Setup/Out-of-Box-Experience (OOBE), Autopilot, or to deploy Windows Hello for Business.
+- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter.
## Troubleshooting
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
Title: Frequently asked questions (FAQs) about CloudKnox Permissions Management
-description: Frequently asked questions (FAQs) about CloudKnox Permissions Management.
+ Title: Frequently asked questions (FAQs) about Permissions Management
+description: Frequently asked questions (FAQs) about Permissions Management.
# Frequently asked questions (FAQs)

> [!IMPORTANT]
-> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Entra Permissions Management is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

> [!NOTE]
-> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
-This article answers frequently asked questions (FAQs) about CloudKnox Permissions Management (CloudKnox).
+This article answers frequently asked questions (FAQs) about Permissions Management.
-## What's CloudKnox Permissions Management?
+## What's Permissions Management?
-CloudKnox is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). CloudKnox detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
+Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
-## What are the prerequisites to use CloudKnox?
+## What are the prerequisites to use Permissions Management?
-CloudKnox supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use CloudKnox.
+Permissions Management supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use Permissions Management.
-## Can a customer use CloudKnox if they have other identities with access to their IaaS platform that aren't yet in Azure AD (for example, if part of their business has Okta or AWS Identity & Access Management (IAM))?
+## Can a customer use Permissions Management if they have other identities with access to their IaaS platform that aren't yet in Azure AD (for example, if part of their business has Okta or AWS Identity & Access Management (IAM))?
Yes, a customer can detect, mitigate, and monitor the risk of 'backdoor' accounts that are local to AWS IAM, GCP, or from other identity providers such as Okta or AWS IAM.
-## Where can customers access CloudKnox?
+## Where can customers access Permissions Management?
-Customers can access the CloudKnox interface with a link from the Azure AD extension in the Azure portal.
+Customers can access the Permissions Management interface with a link from the Azure AD extension in the Azure portal.
-## Can non-cloud customers use CloudKnox on-premises?
+## Can non-cloud customers use Permissions Management on-premises?
-No, CloudKnox is a hosted cloud offering.
+No, Permissions Management is a hosted cloud offering.
-## Can non-Azure customers use CloudKnox?
+## Can non-Azure customers use Permissions Management?
-Yes, non-Azure customers can use our solution. CloudKnox is a multi-cloud solution so even customers who have no subscription to Azure can benefit from it.
+Yes, non-Azure customers can use our solution. Permissions Management is a multi-cloud solution so even customers who have no subscription to Azure can benefit from it.
-## Is CloudKnox available for tenants hosted in the European Union (EU)?
+## Is Permissions Management available for tenants hosted in the European Union (EU)?
-No, the CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+No, the Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
-## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does CloudKnox provide?
+## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does Permissions Management provide?
-CloudKnox complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while CloudKnox allows multi-cloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
+Permissions Management complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while Permissions Management allows multi-cloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
-## What languages does CloudKnox support?
+## What languages does Permissions Management support?
-CloudKnox currently supports English.
+Permissions Management currently supports English.
-## What public cloud infrastructures are supported by CloudKnox?
+## What public cloud infrastructures are supported by Permissions Management?
-CloudKnox currently supports the three major public clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
+Permissions Management currently supports the three major public clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
-## Does CloudKnox support hybrid environments?
+## Does Permissions Management support hybrid environments?
-CloudKnox currently doesn't support hybrid environments.
+Permissions Management currently doesn't support hybrid environments.
-## What types of identities are supported by CloudKnox?
+## What types of identities are supported by Permissions Management?
-CloudKnox supports user identities (for example, employees, customers, external partners) and workload identities (for example, virtual machines, containers, web apps, serverless functions).
+Permissions Management supports user identities (for example, employees, customers, external partners) and workload identities (for example, virtual machines, containers, web apps, serverless functions).
-<!## Is CloudKnox General Data Protection Regulation (GDPR) compliant?
+<!## Is Permissions Management General Data Protection Regulation (GDPR) compliant?
-CloudKnox is currently not GDPR compliant.>
+Permissions Management is currently not GDPR compliant.>
-## Is CloudKnox available in Government Cloud?
+## Is Permissions Management available in Government Cloud?
-No, CloudKnox is currently not available in Government clouds.
+No, Permissions Management is currently not available in Government clouds.
-## Is CloudKnox available for sovereign clouds?
+## Is Permissions Management available for sovereign clouds?
-No, CloudKnox is currently not available in sovereign Clouds.
+No, Permissions Management is currently not available in sovereign Clouds.
-## How does CloudKnox collect insights about permissions usage?
+## How does Permissions Management collect insights about permissions usage?
-CloudKnox has a data collector that collects access permissions assigned to various identities, activity logs, and resources metadata. This gathers full visibility into permissions granted to all identities to access the resources and details on usage of granted permissions.
+Permissions Management has a data collector that collects access permissions assigned to various identities, activity logs, and resources metadata. This provides full visibility into the permissions granted to all identities to access resources, along with details on the usage of granted permissions.
-## How does CloudKnox evaluate cloud permissions risk?
+## How does Permissions Management evaluate cloud permissions risk?
-CloudKnox offers granular visibility into all identities and their permissions granted versus used, across cloud infrastructures to uncover any action performed by any identity on any resource. This isn't limited to just user identities, but also workload identities such as virtual machines, access keys, containers, and scripts. The dashboard gives an overview of permission profile to locate the riskiest identities and resources.
+Permissions Management offers granular visibility into all identities and their permissions granted versus used, across cloud infrastructures to uncover any action performed by any identity on any resource. This isn't limited to just user identities, but also workload identities such as virtual machines, access keys, containers, and scripts. The dashboard gives an overview of permission profile to locate the riskiest identities and resources.
## What is the Permissions Creep Index?

The Permissions Creep Index (PCI) is a quantitative measure of risk associated with an identity or role, determined by comparing permissions granted versus permissions exercised. It allows users to instantly evaluate the level of risk associated with the number of unused or over-provisioned permissions across identities and resources. It measures how much damage identities can cause based on the permissions they have.
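Microsoft doesn't publish the exact PCI scoring formula, but the granted-versus-exercised comparison it describes can be illustrated with a toy calculation (entirely hypothetical function and sample permissions):

```python
def creep_ratio(granted, exercised):
    """Toy granted-vs-used comparison; the real PCI scoring is not public."""
    if not granted:
        return 0.0
    unused = granted - exercised  # permissions the identity never used
    return len(unused) / len(granted)

# Hypothetical identity: four permissions granted, only one ever exercised.
granted = {"s3:GetObject", "s3:PutObject", "iam:CreateUser", "ec2:StartInstances"}
exercised = {"s3:GetObject"}
print(creep_ratio(granted, exercised))  # 3 of 4 permissions unused -> 0.75
```

A higher ratio flags an identity carrying more unused permissions, which is the kind of over-provisioning the PCI dashboard surfaces for remediation.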
-## How can customers use CloudKnox to delete unused or excessive permissions?
+## How can customers use Permissions Management to delete unused or excessive permissions?
-CloudKnox allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size permissions of that identity to only the permissions that are being used for day-to-day operations. All unused and other risky permissions can be automatically removed.
+Permissions Management allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size permissions of that identity to only the permissions that are being used for day-to-day operations. All unused and other risky permissions can be automatically removed.
-## How can customers grant permissions on-demand with CloudKnox?
+## How can customers grant permissions on-demand with Permissions Management?
For any break-glass or one-off scenarios where an identity needs to perform a specific set of actions on a set of specific resources, the identity can request those permissions on-demand for a limited period with a self-service workflow. Customers can either use the built-in workflow engine or their IT service management (ITSM) tool. The user experience is the same for any identity type, identity source (local, enterprise directory, or federated) and cloud.
For any break-glass or one-off scenarios where an identity needs to perform a sp
Just-in-time (JIT) access is a method used to enforce the principle of least privilege to ensure identities are given the minimum level of permissions to perform the task at hand. Permissions on-demand are a type of JIT access that allows the temporary elevation of permissions, enabling identities to access resources on a by-request, timed basis.
-## How can customers monitor permissions usage with CloudKnox?
+## How can customers monitor permissions usage with Permissions Management?
-Customers only need to track the evolution of their Permission Creep Index to monitor permissions usage. They can do this in the "Analytics" tab in their CloudKnox dashboard where they can see how the PCI of each identity or resource is evolving over time.
+Customers only need to track the evolution of their Permission Creep Index to monitor permissions usage. They can do this in the "Analytics" tab in their Permissions Management dashboard where they can see how the PCI of each identity or resource is evolving over time.
## Can customers generate permissions usage reports?
-Yes, CloudKnox has various types of system report available that capture specific data sets. These reports allow customers to:
+Yes, Permissions Management has various types of system report available that capture specific data sets. These reports allow customers to:
- Make timely decisions.
- Analyze usage trends and system/user performance.
- Identify high-risk areas.

For information about permissions usage reports, see [Generate and download the Permissions analytics report](product-permissions-analytics-reports.md).
-## Does CloudKnox integrate with third-party ITSM (Information Technology Security Management) tools?
+## Does Permissions Management integrate with third-party ITSM (IT service management) tools?
-CloudKnox integrates with ServiceNow.
+Permissions Management integrates with ServiceNow.
+## How is Permissions Management being deployed?
-## How is CloudKnox being deployed?
+Customers with the Global Admin role first have to onboard Permissions Management on their Azure AD tenant, and then onboard their AWS accounts, GCP projects, and Azure subscriptions. More details about onboarding can be found in our product documentation.
-Customers with Global Admin role have first to onboard CloudKnox on their Azure AD tenant, and then onboard their AWS accounts, GCP projects, and Azure subscriptions. More details about onboarding can be found in our product documentation.
-
-## How long does it take to deploy CloudKnox?
+## How long does it take to deploy Permissions Management?
It depends on each customer and how many AWS accounts, GCP projects, and Azure subscriptions they have.
-## Once CloudKnox is deployed, how fast can I get permissions insights?
+## Once Permissions Management is deployed, how fast can I get permissions insights?
Once fully onboarded with data collection set up, customers can access permissions usage insights within hours. Our machine-learning engine refreshes the Permission Creep Index every hour so that customers can start their risk assessment right away.
-## Is CloudKnox collecting and storing sensitive personal data?
+## Is Permissions Management collecting and storing sensitive personal data?
-No, CloudKnox doesn't have access to sensitive personal data.
+No, Permissions Management doesn't have access to sensitive personal data.
-## Where can I find more information about CloudKnox?
+## Where can I find more information about Permissions Management?
You can read our blog and visit our web page. You can also get in touch with your Microsoft point of contact to schedule a demo.

## Resources

- [Public Preview announcement blog](https://www.aka.ms/CloudKnox-Public-Preview-Blog)
-- [CloudKnox Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
+- [Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
## Next steps

-- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](overview.md).
-- For information on how to onboard CloudKnox in your organization, see [Enable CloudKnox in your organization](onboard-enable-tenant.md).
+- For an overview of Permissions Management, see [What's Permissions Management?](overview.md).
+- For information on how to onboard Permissions Management in your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
active-directory Scenario Desktop Acquire Token Device Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md
Title: Acquire a token to call a web API using device code flow (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using device code flow -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Integrated Windows Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-integrated-windows-authentication.md
Title: Acquire a token to call a web API using integrated Windows authentication (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using integrated Windows authentication -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-interactive.md
Title: Acquire a token to call a web API interactively (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app interactively -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Username Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-username-password.md
Title: Acquire a token to call a web API using username and password (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using username and password. -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
Title: Acquire a token to call a web API using web account manager (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using web account manager -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token.md
Title: Acquire a token to call a web API (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app -+
Last updated 08/25/2021-+ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
let accounts = await msalTokenCache.getAllAccounts();
const tokenRequest = {
    code: response["authorization_code"],
- codeVerifier: verifier // PKCE Code Verifier
+ codeVerifier: verifier, // PKCE Code Verifier
    redirectUri: "your_redirect_uri",
    scopes: ["User.Read"],
};
active-directory Groups Dynamic Rule More Efficient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-more-efficient.md
Minimize the usage of the 'match' operator in rules as much as possible. Instead
It's better to use rules like:

-- `user.city -contains "ago,"`
-- `user.city -startswith "Lag,"`
+- `user.city -contains "ago"`
+- `user.city -startswith "Lag"`
Or, best of all:
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
After you sign in to the Azure portal, you can create a new tenant for your orga
1. Select **Next: Configuration** to move on to the Configuration tab.
+1. On the Configuration tab, enter the following information:
+ ![Azure Active Directory - Create a tenant page - configuration tab ](media/active-directory-access-create-new-tenant/azure-ad-create-new-tenant.png)
-1. On the Configuration tab, enter the following information:
-
    - Type _Contoso Organization_ into the **Organization name** box.
    - Type _Contosoorg_ into the **Initial domain name** box.
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
You can create an alert that notifies appropriate administrators when a device i
```
Sign-in logs
-| where ResourceDisplayName == ΓÇ£Device Registration ServiceΓÇ¥
+| where ResourceDisplayName == "Device Registration Service"
-| where conditionalAccessStatus ==ΓÇ¥successΓÇ¥
+| where conditionalAccessStatus == "success"
-| where AuthenticationRequirement <> ΓÇ£multiFactorAuthenticationΓÇ¥
+| where AuthenticationRequirement <> "multiFactorAuthentication"
```

You can also use [Microsoft Intune to set and monitor device compliance policies](/mem/intune/protect/device-compliance-get-started).
It might not be possible to block access to all cloud and software-as-a-service
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant ==false| If requiring sign-in from compliant devices, alert when:<br><li> any sign in by non-compliant devices.<li> any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
+| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant == false| If requiring sign-in from compliant devices, alert when:<br><li> any sign in by non-compliant devices.<li> any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
| Sign-ins by unknown devices| Low| Sign-in logs| <li>DeviceDetail is empty<li>Single factor authentication<li>From a non-trusted location| Look for: <br><li>any access from out of compliance devices.<li>any access without MFA or trusted location |
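As a rough illustration of the first row's filter logic (field names follow the sign-in log schema shown in the table; the records and helper function are hypothetical):

```python
# Hypothetical sketch of the alert logic above: flag successful sign-ins
# from devices that are not marked compliant. Field names mirror the
# Azure AD sign-in log schema (DeviceDetail.isCompliant, conditionalAccessStatus).
sign_ins = [
    {"user": "alice", "conditionalAccessStatus": "success",
     "DeviceDetail": {"isCompliant": True}},
    {"user": "bob", "conditionalAccessStatus": "success",
     "DeviceDetail": {"isCompliant": False}},
]

def noncompliant_sign_ins(records):
    # Keep only successful sign-ins where the device compliance flag is absent or false.
    return [r for r in records
            if r["conditionalAccessStatus"] == "success"
            and not r["DeviceDetail"].get("isCompliant", False)]

for r in noncompliant_sign_ins(sign_ins):
    print(r["user"])  # bob
```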
It might not be possible to block access to all cloud and software-as-a-service
```
SigninLogs
-| where DeviceDetail.isCompliant ==false
+| where DeviceDetail.isCompliant == false
-| where conditionalAccessStatus == ΓÇ£successΓÇ¥
+| where conditionalAccessStatus == "success"
```
Attackers who have compromised a user's device may retrieve the [BitLocker](/w
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes | | - |- |- |- |- |
-| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker keyΓÇ¥| Look for <br><li>key retrieval`<li> other anomalous behavior by users retrieving keys. |
+| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker key"| Look for <br><li>key retrieval<li> other anomalous behavior by users retrieving keys. |
In Log Analytics, create a query such as:
```
AuditLogs
-| where OperationName == "Read BitLocker keyΓÇ¥
+| where OperationName == "Read BitLocker key"
```

## Device administrator roles
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## May 2022
+
+### New articles
+
+- [My Apps portal overview](myapps-overview.md)
+
+### Updated articles
+
+- [Tutorial: Configure Datawiza with Azure Active Directory for secure hybrid access](datawiza-with-azure-ad.md)
+- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md)
+- [Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication](migrate-okta-federation-to-azure-active-directory.md)
+- [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)
+ ## March 2022 ### New articles
active-directory Pim Resource Roles Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md
Title: Configure security alerts for Azure resource roles in Privileged Identity Management - Azure Active Directory | Microsoft Docs
+ Title: Configure security alerts for Azure roles in Privileged Identity Management - Azure Active Directory | Microsoft Docs
description: Learn how to configure security alerts for Azure resource roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''
na Previously updated : 10/07/2021 Last updated : 06/03/2022
-# Configure security alerts for Azure resource roles in Privileged Identity Management
+# Configure security alerts for Azure roles in Privileged Identity Management
Privileged Identity Management (PIM) generates alerts when there is suspicious or unsafe activity in your Azure Active Directory (Azure AD) organization. When an alert is triggered, it shows up on the Alerts page.
Select an alert to see a report that lists the users or roles that triggered the
## Alerts
-| Alert | Severity | Trigger | Recommendation |
-| | | | |
-| **Too many owners assigned to a resource** |Medium |Too many users have the owner role. |Review the users in the list and reassign some to less privileged roles. |
-| **Too many permanent owners assigned to a resource** |Medium |Too many users are permanently assigned to a role. |Review the users in the list and re-assign some to require activation for role use. |
-| **Duplicate role created** |Medium |Multiple roles have the same criteria. |Use only one of these roles. |
+Alert | Severity | Trigger | Recommendation
+ | | |
+**Too many owners assigned to a resource** | Medium | Too many users have the owner role. | Review the users in the list and reassign some to less privileged roles.
+**Too many permanent owners assigned to a resource** | Medium | Too many users are permanently assigned to a role. | Review the users in the list and re-assign some to require activation for role use.
+**Duplicate role created** | Medium | Multiple roles have the same criteria. | Use only one of these roles.
+**Roles are being assigned outside of Privileged Identity Management (Preview)** | High | A role is managed directly through the Azure IAM resource blade or the Azure Resource Manager API. | Review the users in the list and remove them from privileged roles assigned outside of Privileged Identity Management.
+
+> [!NOTE]
+> During the public preview of the **Roles are being assigned outside of Privileged Identity Management (Preview)** alert, Microsoft supports only permissions that are assigned at the subscription level.
### Severity
active-directory Blinq Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blinq-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Navigate to [Blinq Admin Console](https://dash.blinq.me) in a separate browser tab. 1. If you aren't logged in to Blinq you will need to do so.
-1. Click on your workspace in the top left corner of the screen.
-1. In the dropdown click **Settings**.
+1. Click on your workspace in the top left-hand corner of the screen and select **Settings** in the dropdown menu.
+
+ [![Screenshot of the Blinq settings option.](media/blinq-provisioning-tutorial/blinq-settings.png)](media/blinq-provisioning-tutorial/blinq-settings.png#lightbox)
1. Under the **Integrations** page you should see **Team Card Provisioning**, which contains a URL and Token. Generate the token by clicking **Generate**, then copy the **URL** and **Token**. Enter them into the **Tenant URL** and **Secret Token** fields in the Azure portal, respectively.
+ [![Screenshot of the Blinq integration page.](media/blinq-provisioning-tutorial/blinq-integrations-page.png)](media/blinq-provisioning-tutorial/blinq-integrations-page.png#lightbox)
## Step 3. Add Blinq from the Azure AD application gallery

Add Blinq from the Azure AD application gallery to start managing provisioning to Blinq. If you have previously set up Blinq for SSO, you can use the same application. However it's recommended you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
A drawback with the traditional CNI is the exhaustion of pod IP addresses as the
### Additional prerequisites
+> [!NOTE]
+> When using dynamic allocation of IPs, exposing an application as a Private Link Service using a Kubernetes Load Balancer Service is not supported.
The [prerequisites][prerequisites] already listed for Azure CNI still apply, but there are a few additional limitations:

* Only Linux node clusters and node pools are supported.
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
You must ensure that:
* The secret is created in the `kube-system` namespace. ```yaml
-apiVerison: v1
+apiVersion: v1
kind: Secret metadata: name: custom-ca-trust-secret
For more information on AKS security best practices, see [Best practices for clu
[az-extension-update]: /cli/azure/extension#az-extension-update [az-feature-list]: /cli/azure/feature#az-feature-list [az-feature-register]: /cli/azure/feature#az-feature-register
-[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-provider-register]: /cli/azure/provider#az-provider-register
aks Deployment Center Launcher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deployment-center-launcher.md
Title: Deployment Center for Azure Kubernetes description: Deployment Center in Azure DevOps simplifies setting up a robust Azure DevOps pipeline for your application-+ Last updated 07/12/2019-+ # Deployment Center for Azure Kubernetes
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
However, these external scalers aren't supported as part of the add-on and rely
<!-- LINKS - external --> [keda-scalers]: https://keda.sh/docs/scalers/ [keda-metrics]: https://keda.sh/docs/latest/operate/prometheus/
-[keda-event-docs]: https://keda.sh/docs/latest/operate/kubernetes-events/
+[keda-event-docs]: https://keda.sh/docs/2.7/operate/events/
[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
Whilst AKS customers are able to route egress traffic through an Azure Load Balancer, there are limitations on the number of outbound traffic flows that are possible.
-Azure NAT Gateway allows up to 64,000 outbound UDP and TCP traffic flows per IP address with a maximum of 16 IP addresses.
+Azure NAT Gateway allows up to 64,512 outbound UDP and TCP traffic flows per IP address with a maximum of 16 IP addresses.
This article will show you how to create an AKS cluster with a Managed NAT Gateway for egress traffic.
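As a back-of-the-envelope check of the aggregate capacity these limits imply (a worked calculation only; usable flows in practice depend on configuration and traffic patterns):

```python
# Aggregate egress capacity using the limits stated above:
# 64,512 outbound UDP and TCP flows per IP address, up to 16 IP addresses.
flows_per_ip = 64_512
max_ips = 16

total_flows = flows_per_ip * max_ips
print(total_flows)  # 1032192
```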
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
OSM provides the following capabilities and features:
- Define and execute fine grained access control policies for services.
- Monitor and debug services using observability and insights into application metrics.
- Integrate with external certificate management.
-- Integrates with existing ingress solutions such as the [Azure Gateway Ingress Controller][agic], [NGINX][nginx], and [Contour][contour]. For more details on how ingress works with OSM, see [Using Ingress to manage external access to services within the cluster][osm-ingress]. For an example on integrating OSM with Contour for ingress, see [Ingress with Contour][osm-contour]. For an example on integrating OSM with ingress controllers that use the `networking.k8s.io/v1` API, such as NGINX, see [Ingress with Kubernetes Nginx Ingress Controller][osm-nginx].
+- Integrates with existing ingress solutions such as [NGINX][nginx], [Contour][contour], and [Web Application Routing][web-app-routing]. For more details on how ingress works with OSM, see [Using Ingress to manage external access to services within the cluster][osm-ingress]. For an example on integrating OSM with Contour for ingress, see [Ingress with Contour][osm-contour]. For an example on integrating OSM with ingress controllers that use the `networking.k8s.io/v1` API, such as NGINX, see [Ingress with Kubernetes Nginx Ingress Controller][osm-nginx]. For more details on using Web Application Routing, which automatically integrates with OSM, see [Web Application Routing][web-app-routing].
## Example scenarios
After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep t
[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
[ip-tables-redirection]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/
[global-exclusion]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
-[agic]: ../application-gateway/ingress-controller-overview.md
[nginx]: https://github.com/kubernetes/ingress-nginx
[contour]: https://projectcontour.io/
[osm-ingress]: https://release-v1-0.docs.openservicemesh.io/docs/guides/traffic_management/ingress/
[osm-contour]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_contour
[osm-nginx]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx
+[web-app-routing]: web-app-routing.md
aks Open Service Mesh Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-integrations.md
The Open Service Mesh (OSM) add-on integrates with features provided by Azure as
## Ingress
-Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with either [NGINX ingress][osm-nginx] or [Contour ingress][osm-contour]. Open source projects integrating with OSM, including NGINX ingress and Contour ingress, are not covered by the [AKS support policy][aks-support-policy].
+Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with [Web Application Routing][web-app-routing], [NGINX ingress][osm-nginx], or [Contour ingress][osm-contour]. Open source projects integrating with OSM, including NGINX ingress and Contour ingress, are not covered by the [AKS support policy][aks-support-policy].
Using [Azure Gateway Ingress Controller (AGIC)][agic] for ingress with OSM is not supported and not recommended.
OSM has several types of certificates it uses to operate on your AKS cluster. OS
[osm-cert-manager]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-cert-manager
[open-source-integrations]: open-service-mesh-integrations.md#additional-open-source-integrations
[osm-traffic-management-example]: https://github.com/MicrosoftDocs/azure-docs/pull/81085/files
-[osm-tresor]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-osms-tresor-certificate-issuer
+[osm-tresor]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-osms-tresor-certificate-issuer
+[web-app-routing]: web-app-routing.md
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
az keyvault secret set --vault-name MyAKSGMSAVault --name "GMSADomainUserCred" -
> [!NOTE] > Use the Fully Qualified Domain Name for the Domain rather than the Partially Qualified Domain Name that may be used on internal networks.
+>
+> The above command escapes the `value` parameter for running the Azure CLI on a Linux shell. When running the Azure CLI command on Windows PowerShell, you don't need to escape characters in the `value` parameter.
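The escaping concern in the note can be illustrated with Python's standard-library `shlex.quote`, which produces a POSIX-shell-safe form of an arbitrary value (a side illustration only; the Azure CLI does not require Python, and the sample value is hypothetical):

```python
import shlex

# A credential value containing characters ($, !, spaces) that a POSIX shell
# would otherwise interpret. shlex.quote wraps the value so it passes through
# to the command intact; values with only safe characters are left unchanged.
secret = 'Passw0rd$with!special chars'
print(shlex.quote(secret))
print(shlex.quote("plainvalue"))
```

PowerShell has different quoting rules, which is why the escaping shown for a Linux shell isn't needed there.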
## Optional: Use a custom VNET with custom DNS
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use KMS etcd encryption in Azure Kubernetes Service (AKS) (Preview)
description: Learn how to use kms etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 04/11/2022 Last updated : 06/06/2022
The following limitations apply when you integrate KMS etcd encryption with AKS:
* Changing of key ID, including key name and key version.
* Deletion of the key, Key Vault, or the associated identity.
* KMS etcd encryption doesn't work with System-Assigned Managed Identity. The Key Vault access policy is required to be set before the feature is enabled. In addition, System-Assigned Managed Identity isn't available until cluster creation, thus there's a cycle dependency.
-* Using Azure Key Vault with PrivateLink enabled.
* Using more than 2000 secrets in a cluster.
-* Managed HSM Support
* Bring your own (BYO) Azure Key Vault from another tenant.

## Create a KeyVault and key

> [!WARNING]
api-management Add Api Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/add-api-manually.md
Test the operation in the Azure portal. You can also test it in the **Developer
This section shows how to add a wildcard operation. A wildcard operation lets you pass an arbitrary value with an API request. Instead of creating separate GET operations as shown in the previous sections, you could create a wildcard GET operation.
+> [!CAUTION]
+> Use care when configuring a wildcard operation. This configuration may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#improper-assets-management).
+
### Add the operation

1. Select the API you created in the previous step.
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
Previously updated : 03/04/2022 Last updated : 06/03/2022
This article provides a reference for API Management access restriction policies
## <a name="AccessRestrictionPolicies"></a> Access restriction policies

- [Check HTTP header](#CheckHTTPHeader) - Enforces existence and/or value of an HTTP header.
+- [Get authorization context](#GetAuthorizationContext) - Gets the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.
- [Limit call rate by subscription](#LimitCallRate) - Prevents API usage spikes by limiting call rate, on a per subscription basis.
- [Limit call rate by key](#LimitCallRateByKey) - Prevents API usage spikes by limiting call rate, on a per key basis.
- [Restrict caller IPs](#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges.
- [Set usage quota by subscription](#SetUsageQuota) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.
- [Set usage quota by key](#SetUsageQuotaByKey) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.
-- [Validate JWT](#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter.
+- [Validate JWT](#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP header or a specified query parameter.
- [Validate client certificate](#validate-client-certificate) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims.

> [!TIP]
Use the `check-header` policy to enforce that a request has a specified HTTP hea
| -- | - | -- | - |
| failed-check-error-message | Error message to return in the HTTP response body if the header doesn't exist or has an invalid value. This message must have any special characters properly escaped. | Yes | N/A |
| failed-check-httpcode | HTTP Status code to return if the header doesn't exist or has an invalid value. | Yes | N/A |
-| header-name | The name of the HTTP Header to check. | Yes | N/A |
+| header-name | The name of the HTTP header to check. | Yes | N/A |
| ignore-case | Can be set to True or False. If set to True case is ignored when the header value is compared against the set of acceptable values. | Yes | N/A |

### Usage
This policy can be used in the following policy [sections](./api-management-howt
- **Policy scopes:** all scopes
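The documented `check-header` semantics can be sketched roughly in Python (a simplified model; the function, the `401` status code, and the header values here are illustrative assumptions, not part of API Management, where the failure code comes from `failed-check-httpcode`):

```python
def check_header(headers, header_name, allowed_values, ignore_case=True):
    """Simplified model of check-header: the named header must exist and its
    value must be one of the acceptable values; otherwise the configured
    failure status code (401 assumed here) would be returned."""
    value = headers.get(header_name)
    if value is None:
        return 401, False  # header missing -> failed-check-httpcode
    if ignore_case:
        ok = value.lower() in {v.lower() for v in allowed_values}
    else:
        ok = value in allowed_values
    return (200, True) if ok else (401, False)

print(check_header({"Authorization": "Bearer abc"}, "Authorization", ["bearer abc"]))
# (200, True) because ignore_case defaults to True in this sketch
```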
+## <a name="GetAuthorizationContext"></a> Get authorization context
+
+Use the `get-authorization-context` policy to get the authorization context of a specified [authorization](authorizations-overview.md) (preview) configured in the API Management instance.
+
+The policy fetches and stores authorization and refresh tokens from the configured authorization provider.
+
+If `identity-type=jwt` is configured, a JWT token is required to be validated. The audience of this token must be `https://azure-api.net/authorization-manager`.
+### Policy statement
+
+```xml
+<get-authorization-context
+ provider-id="authorization provider id"
+ authorization-id="authorization id"
+ context-variable-name="variable name"
+ identity-type="managed | jwt"
+ identity="JWT bearer token"
+ ignore-error="true | false" />
+```
+
+### Examples
+
+#### Example 1: Get token back
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="github-01"
+ authorization-id="auth-01"
+ context-variable-name="auth-context"
+ identity-type="managed"
+ identity="@(context.Request.Headers["Authorization"][0].Replace("Bearer ", ""))"
+ ignore-error="false" />
+<!-- Return the token -->
+<return-response>
+ <set-status code="200" />
+ <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
+</return-response>
+```
+
+#### Example 2: Get token back with dynamically set attributes
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="@(context.Request.Url.Query.GetValueOrDefault("authorizationProviderId"))"
+ authorization-id="@(context.Request.Url.Query.GetValueOrDefault("authorizationId"))"
+ context-variable-name="auth-context"
+ ignore-error="false"
+ identity-type="managed" />
+<!-- Return the token -->
+<return-response>
+ <set-status code="200" />
+ <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
+</return-response>
+```
+
+#### Example 3: Attach the token to the backend call
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="github-01"
+ authorization-id="auth-01"
+ context-variable-name="auth-context"
+ identity-type="managed"
+ ignore-error="false" />
+<!-- Attach the token to the backend call -->
+<set-header name="Authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+</set-header>
+```
+
+#### Example 4: Get token from incoming request and return token
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="github-01"
+ authorization-id="auth-01"
+ context-variable-name="auth-context"
+ identity-type="jwt"
+ identity="@(context.Request.Headers["Authorization"][0].Replace("Bearer ", ""))"
+ ignore-error="false" />
+<!-- Return the token -->
+<return-response>
+ <set-status code="200" />
+ <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
+</return-response>
+```
+
+### Elements
+
+| Name | Description | Required |
+| -- | - | -- |
+| get-authorization-context | Root element. | Yes |
+
+### Attributes
+
+| Name | Description | Required | Default |
+|||||
+| provider-id | The authorization provider resource identifier. | Yes | |
+| authorization-id | The authorization resource identifier. | Yes | |
+| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). | Yes | |
+| identity-type | Type of identity to be checked against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute. | No | managed |
+| identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID | No | |
+| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource isn't found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: a `500` error is returned. | No | false |
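+
+As a minimal sketch (provider and authorization names assumed from the earlier examples), when `ignore-error="true"` is set, the context variable can be checked for `null` before it's used:
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+    provider-id="github-01"
+    authorization-id="auth-01"
+    context-variable-name="auth-context"
+    identity-type="managed"
+    ignore-error="true" />
+<!-- If the authorization context couldn't be acquired, the context variable is null -->
+<choose>
+    <when condition="@(context.Variables.GetValueOrDefault("auth-context") == null)">
+        <return-response>
+            <set-status code="401" reason="Unauthorized" />
+        </return-response>
+    </when>
+</choose>
+```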
+
+### Authorization object
+
+The Authorization context variable receives an object of type `Authorization`.
+
+```csharp
+class Authorization
+{
+ public string AccessToken { get; }
+ public IReadOnlyDictionary<string, object> Claims { get; }
+}
+```
+
+| Property Name | Description |
+| -- | -- |
+| AccessToken | Bearer access token to authorize a backend HTTP request. |
+| Claims | Claims returned from the authorization server's token response API (see [RFC6749#section-5.1](https://datatracker.ietf.org/doc/html/rfc6749#section-5.1)). |
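+
+The `Claims` dictionary can also be read in a policy expression. The following is a sketch only: the `scope` claim name and the header name are assumptions, and which claims are actually present depends on the identity provider's token response.
+
+```xml
+<!-- Forward a claim from the token response as a header (claim name assumed) -->
+<set-header name="X-Token-Scope" exists-action="override">
+    <value>@{
+        var auth = (Authorization)context.Variables.GetValueOrDefault("auth-context");
+        return auth != null && auth.Claims.ContainsKey("scope") ? auth.Claims["scope"].ToString() : "";
+    }</value>
+</set-header>
+```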
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** inbound
+
+- **Policy scopes:** all scopes
++ ## <a name="LimitCallRate"></a> Limit call rate by subscription The `rate-limit` policy prevents API usage spikes on a per subscription basis by limiting the call rate to a specified number per a specified time period. When the call rate is exceeded, the caller receives a `429 Too Many Requests` response status code.
This policy can be used in the following policy [sections](./api-management-howt
## <a name="ValidateJWT"></a> Validate JWT
-The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from either a specified HTTP Header or a specified query parameter.
+The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from either a specified HTTP header or a specified query parameter.
> [!IMPORTANT] > The `validate-jwt` policy requires that the `exp` registered claim is included in the JWT token, unless `require-expiration-time` attribute is specified and set to `false`.
api-management Api Management Cross Domain Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-cross-domain-policies.md
Use the `cross-domain` policy to make the API accessible from Adobe Flash and Mi
|-|--|--| |cross-domain|Root element. Child elements must conform to the [Adobe cross-domain policy file specification](https://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/CrossDomain_PolicyFile_Specification.pdf).|Yes|
+> [!CAUTION]
+> Use the `*` wildcard with care in policy settings. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ ### Usage This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
This example demonstrates how to support [pre-flight requests](https://developer
|expose-headers|This element contains `header` elements specifying names of the headers that will be accessible by the client.|No|N/A| |header|Specifies a header name.|At least one `header` element is required in `allowed-headers` or `expose-headers` if the section is present.|N/A|
+> [!CAUTION]
+> Use the `*` wildcard with care in policy settings. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ ### Attributes |Name|Description|Required|Default|
api-management Api Management Get Started Revise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-revise-api.md
Use this procedure to create and update a release.
The notes you specify appear in the change log. You can see them in the output of the previous command.
-1. When you create a release, the `--notes` parameter is optional. You can add or change the notes later using the [az apim api release update](/cli/azure/apim/api/release#az_apim_api_release_update) command:
+1. When you create a release, the `--notes` parameter is optional. You can add or change the notes later using the [az apim api release update](/cli/azure/apim/api/release#az-apim-api-release-update) command:
```azurecli az apim api release update --resource-group apim-hello-word-resource-group \
api-management Api Management Howto Add Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-add-products.md
In this tutorial, you learn how to:
1. Select **Create** to create your new product.
+> [!CAUTION]
+> Use care when configuring a product that doesn't require a subscription. This configuration may be overly permissive and may make the product's APIs more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ ### [Azure CLI](#tab/azure-cli) To begin using Azure CLI:
You can specify various values for your product:
| `--subscriptions-limit` | Optionally, limit the count of multiple simultaneous subscriptions.| | `--legal-terms` | You can include the terms of use for the product, which subscribers must accept to use the product. |
+> [!CAUTION]
+> Use care when configuring a product that doesn't require a subscription. This configuration may be overly permissive and may make the product's APIs more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ To see your current products, use the [az apim product list](/cli/azure/apim/product#az-apim-product-list) command: ```azurecli
api-management Api Management Howto Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-properties.md
az apim nv delete --resource-group apim-hello-word-resource-group \
The examples in this section use the named values shown in the following table. | Name | Value | Secret |
-|--|-|--||
+|--|-|--|
| ContosoHeader | `TrackingId` | False | | ContosoHeaderValue | •••••••••••••••••••••• | True | | ExpressionProperty | `@(DateTime.Now.ToString())` | False |
+| ContosoHeaderValue2 | `This is a header value.` | False |
To use a named value in a policy, place its display name inside a double pair of braces like `{{ContosoHeader}}`, as shown in the following example:
If you look at the outbound [API trace](api-management-howto-api-inspector.md) f
:::image type="content" source="media/api-management-howto-properties/api-management-api-inspector-trace.png" alt-text="API Inspector trace":::
+String interpolation can also be used with named values.
+
+```xml
+<set-header name="CustomHeader" exists-action="override">
+ <value>@($"The URL encoded value is {System.Net.WebUtility.UrlEncode("{{ContosoHeaderValue2}}")}")</value>
+</set-header>
+```
+
+The value for `CustomHeader` will be `The URL encoded value is This+is+a+header+value.`.
+ > [!CAUTION] > If a policy references a secret in Azure Key Vault, the value from the key vault will be visible to users who have access to subscriptions enabled for [API request tracing](api-management-howto-api-inspector.md).
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
## [Access restriction policies](api-management-access-restriction-policies.md) - [Check HTTP header](api-management-access-restriction-policies.md#CheckHTTPHeader) - Enforces existence and/or value of an HTTP Header.
+- [Get authorization context](api-management-access-restriction-policies.md#GetAuthorizationContext) - Gets the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.
- [Limit call rate by subscription](api-management-access-restriction-policies.md#LimitCallRate) - Prevents API usage spikes by limiting call rate, on a per subscription basis. - [Limit call rate by key](api-management-access-restriction-policies.md#LimitCallRateByKey) - Prevents API usage spikes by limiting call rate, on a per key basis. - [Restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges.
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
API publishers can [create subscriptions](api-management-howto-create-subscripti
By default, a developer can only access a product or API by using a subscription key. Under certain scenarios, API publishers might want to publish a product or a particular API to the public without the requirement of subscriptions. While a publisher could choose to enable unsecured access to certain APIs, configuring another mechanism to secure client access is recommended.
+> [!CAUTION]
+> Use care when configuring a product or an API that doesn't require a subscription. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ To disable the subscription requirement using the portal: * **Disable requirement for product** - Disable **Requires subscription** on the **Settings** page of the product.
api-management Authorizations How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to.md
+
+ Title: Create and use authorization in Azure API Management | Microsoft Docs
+description: Learn how to create and use an authorization in Azure API Management. An authorization manages authorization tokens to OAuth 2.0 backend services. The example uses GitHub as an identity provider.
++++ Last updated : 06/03/2022+++
+# Configure and use an authorization
+
+In this article, you learn how to create an [authorization](authorizations-overview.md) (preview) in API Management and call a GitHub API that requires an authorization token. The authorization code grant type will be used.
+
+Four steps are needed to set up an authorization with the authorization code grant type:
+
+1. Register an application in the identity provider (in this case, GitHub).
+1. Configure an authorization in API Management.
+1. Authorize with GitHub and configure access policies.
+1. Create an API in API Management and configure a policy.
+
+## Prerequisites
+
+- A GitHub account is required.
+- Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+- Enable a [managed identity](api-management-howto-use-managed-service-identity.md) for API Management in the API Management instance.
+
+## Step 1: Register an application in GitHub
+
+1. Sign in to GitHub.
+1. In your account profile, go to **Settings > Developer Settings > OAuth Apps > Register a new application**.
+
+
+ :::image type="content" source="media/authorizations-how-to/register-application.png" alt-text="Screenshot of registering a new OAuth application in GitHub.":::
+ 1. Enter an **Application name** and **Homepage URL** for the application.
+ 1. Optionally, add an **Application description**.
+ 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager-test.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the API Management service name that is used.
+1. Select **Register application**.
+1. In the **General** page, copy the **Client ID**, which you'll use in a later step.
+1. Select **Generate a new client secret**. Copy the secret, which won't be displayed again, and which you'll use in a later step.
+
+ :::image type="content" source="media/authorizations-how-to/generate-secret.png" alt-text="Screenshot showing how to get client ID and client secret for the application in GitHub.":::
+
+## Step 2: Configure an authorization in API Management
+
+1. Sign in to the Azure portal and go to your API Management instance.
+1. In the left menu, select **Authorizations** > **+ Create**.
+
+ :::image type="content" source="media/authorizations-how-to/create-authorization.png" alt-text="Screenshot of creating an API Management authorization in the Azure portal.":::
+1. In the **Create authorization** window, enter the following settings, and select **Create**:
+
+ |Settings |Value |
+ |||
+ |**Provider name** | A name of your choice, such as *github-01* |
+ |**Identity provider** | Select **GitHub** |
+ |**Grant type** | Select **Authorization code** |
+ |**Client id** | Paste the value you copied earlier from the app registration |
+ |**Client secret** | Paste the value you copied earlier from the app registration |
+ |**Scope** | Set the scope to `User` |
+ |**Authorization name** | A name of your choice, such as *auth-01* |
+
+
+
+1. After the authorization provider and authorization are created, select **Next**.
+
+1. On the **Login** tab, select **Login with GitHub**. Before the authorization will work, it needs to be authorized at GitHub.
+
+ :::image type="content" source="media/authorizations-how-to/authorize-with-github.png" alt-text="Screenshot of logging into the GitHub authorization from the portal.":::
+
+## Step 3: Authorize with GitHub and configure access policies
+
+1. Sign in to your GitHub account if you're prompted to do so.
+1. Select **Authorize** so that the application can access the signed-in user's account.
+
+ :::image type="content" source="media/authorizations-how-to/consent-to-authorization.png" alt-text="Screenshot of consenting to authorize with Github.":::
+
+ After authorization, the browser is redirected to API Management and the window is closed. If prompted during redirection, select **Allow access**. In API Management, select **Next**.
+1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
+
+1. Select **Managed identity** > **+ Add members**, and then select your subscription.
+1. In **Managed identity**, select **API Management service**, and then select the API Management instance that is used. Click **Select** and then **Complete**.
+
+ :::image type="content" source="media/authorizations-how-to/select-managed-identity.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::
+
+## Step 4: Create an API in API Management and configure a policy
+
+1. Sign in to the Azure portal and go to your API Management instance.
+1. In the left menu, select **APIs > + Add API**.
+1. Select **HTTP** and enter the following settings. Then select **Create**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *github* |
+ |**Web service URL** | https://api.github.com/users/ |
+ |**API URL suffix** | *github* |
+
+1. Navigate to the newly created API and select **Add Operation**. Enter the following settings and select **Save**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getdata* |
+ |**URL** | /data |
+
+ :::image type="content" source="media/authorizations-how-to/add-operation.png" alt-text="Screenshot of adding a getdata operation to the API in the portal.":::
+
+1. In the **Inbound processing** section, select the (**</>**) (code editor) icon.
+1. Copy the following policy and paste it in the policy editor. Make sure the `provider-id` and `authorization-id` attribute values correspond to the names used in step 2.3. Select **Save**.
+
+ ```xml
+ <policies>
+ <inbound>
+ <base />
+ <get-authorization-context provider-id="github-01" authorization-id="auth-01" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
+ <set-header name="Authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+ </set-header>
+ <rewrite-uri template="@(context.Request.Url.Query.GetValueOrDefault("username",""))" copy-unmatched-params="false" />
+ <set-header name="User-Agent" exists-action="override">
+ <value>API Management</value>
+ </set-header>
+ </inbound>
+ <backend>
+ <base />
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
+ <on-error>
+ <base />
+ </on-error>
+ </policies>
+ ```
+
+ The policy to be used consists of four parts.
+
+ - Fetch an authorization token.
+ - Create an HTTP header with the fetched authorization token.
+ - Create an HTTP header with a `User-Agent` header (GitHub requirement). [Learn more](https://docs.github.com/rest/overview/resources-in-the-rest-api#user-agent-required)
+ - Because the incoming request to API Management will consist of a query parameter called *username*, add the username to the backend call.
+
+ > [!NOTE]
+ > The `get-authorization-context` policy references the authorization provider and authorization that were created earlier. [Learn more](api-management-access-restriction-policies.md#GetAuthorizationContext) about how to configure this policy.
+
+ :::image type="content" source="media/authorizations-how-to/policy-configuration-cropped.png" lightbox="media/authorizations-how-to/policy-configuration.png" alt-text="Screenshot of configuring policy in the portal.":::
+1. Test the API.
+ 1. On the **Test** tab, enter a query parameter with the name *username*.
+ 1. As value, enter the username that was used to sign into GitHub, or another valid GitHub username.
+ 1. Select **Send**.
+ :::image type="content" source="media/authorizations-how-to/test-api.png" alt-text="Screenshot of testing the API successfully in the portal.":::
+
+ A successful response returns user data from the GitHub API.
+
+## Next steps
+
+Learn more about [access restriction policies](api-management-access-restriction-policies.md).
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
+
+ Title: About OAuth 2.0 authorizations in Azure API Management | Microsoft Docs
+description: Learn about authorizations in Azure API Management, a feature that simplifies the process of managing OAuth 2.0 authorization tokens to APIs
+++ Last updated : 06/03/2022+++
+# Authorizations overview
+
+API Management authorizations (preview) simplify the process of managing authorization tokens to OAuth 2.0 backend services.
+By configuring any of the supported identity providers and creating an authorization using the standardized OAuth 2.0 flow, API Management can retrieve and refresh access tokens to be used inside of API Management or sent back to a client.
+This feature enables APIs to be exposed with or without a subscription key, while the authorization to the backend service uses OAuth 2.0.
+
+Some example scenarios that will be possible through this feature are:
+
+- Citizen/low-code developers using Power Apps or Power Automate can easily connect to SaaS providers that use OAuth 2.0.
+- Unattended scenarios such as an Azure function using a timer trigger can utilize this feature to connect to a backend API using OAuth 2.0.
+- A marketing team in an enterprise company could use the same authorization for interacting with a social media platform using OAuth 2.0.
+- Exposing APIs in API Management as a custom connector in Logic Apps where the backend service requires OAuth 2.0 flow.
+- An on-behalf-of scenario, where a service such as Dropbox, or any other service protected by an OAuth 2.0 flow, is used by multiple clients.
+- Connect to different services that require OAuth 2.0 authorization using synthetic GraphQL in API Management.
+- Enterprise Application Integration (EAI) patterns using service-to-service authorization can use the client credentials grant type against backend APIs that use OAuth 2.0.
+- Single-page applications that only want to retrieve an access token to be used in a client's SDK against an API using OAuth 2.0.
+
+The feature consists of two parts, management and runtime:
+
+* The **management** part takes care of configuring identity providers, enabling the consent flow for the identity provider, and managing access to the authorizations.
++
+* The **runtime** part uses the [`get-authorization-context`](api-management-access-restriction-policies.md#GetAuthorizationContext) policy to fetch and store access and refresh tokens. When a call comes into API Management and the `get-authorization-context` policy is executed, it first validates whether the stored authorization token is valid. If the authorization token has expired, the refresh token is used to fetch a new authorization token and refresh token from the configured identity provider. If the refresh succeeds, the new authorization token is used, and both the authorization token and refresh token are stored encrypted.
++
+ During the policy execution, access to the tokens is also validated using access policies.
++
+### Requirements
+
+- Managed system-assigned identity must be enabled for the API Management instance.
+- API Management instance must have outbound connectivity to internet on port `443` (HTTPS).
+
+### Limitations
+
+For the public preview, the following limitations exist:
+
+- The authorizations feature will be available in the Consumption tier in the coming weeks.
+- The authorizations feature isn't supported in the following regions: swedencentral, australiacentral, australiacentral2, jioindiacentral.
+- Supported identity providers: Azure AD, Dropbox, Generic OAuth 2.0, GitHub, Google, LinkedIn, Spotify
+- Maximum configured number of authorization providers per API Management instance: 50
+- Maximum configured number of authorizations per authorization provider: 500
+- Maximum configured number of access policies per authorization: 100
+- Maximum requests per minute per authorization: 100
+- Authorization code PKCE flow with code challenge isn't supported.
+- The authorizations feature isn't supported on self-hosted gateways.
+- API documentation isn't available yet. See [this](https://github.com/Azure/APIManagement-Authorizations) GitHub repository for samples.
+
+### Authorization providers
+
+Authorization provider configuration includes which identity provider and grant type are used. Each identity provider requires different configurations.
+
+* An authorization provider configuration can only have one grant type.
+* One authorization provider configuration can have multiple authorizations.
+
+The following identity providers are supported for public preview:
+
+- Azure AD, Dropbox, Generic OAuth 2.0, GitHub, Google, LinkedIn, Spotify
++
+With the Generic OAuth 2.0 provider, other identity providers that support the standards of OAuth 2.0 flow can be used.
++
+### Authorizations
+
+To use an authorization provider, at least one *authorization* is required. The process of configuring an authorization differs based on the used grant type. Each authorization provider configuration only supports one grant type. For example, if you want to configure Azure AD to use both grant types, two authorization provider configurations are needed.
+
+**Authorization code grant type**
+
+Authorization code grant type is bound to a user context, meaning a user needs to consent to the authorization. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All identity providers support authorization code. [Read more about Authorization code grant type](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1).
+
+**Client credentials grant type**
+
+Client credentials grant type isn't bound to a user and is often used in application-to-application scenarios. No consent is required for client credentials grant type, and the authorization doesn't become invalid. [Read more about Client Credentials grant type](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.4).
++
+### Access policies
+Access policies determine which identities can use the authorization that the access policy is related to. The supported identities are managed identities, user identities, and service principals. The identities must belong to the same Azure AD tenant as the API Management instance.
+
+- **Managed identities** - System- or user-assigned identity for the API Management instance that is being used.
+- **User identities** - Users in the same tenant as the API Management instance.
+- **Service principals** - Applications in the same Azure AD tenant as the API Management instance.
+
+### Process flow for creating authorizations
+
+The following image shows the process flow for creating an authorization in API Management using the authorization code grant type. For the public preview, no API documentation is available; see [this](https://aka.ms/apimauthorizations/postmancollection) Postman collection.
+++
+1. Client sends a request to create an authorization provider.
+1. Authorization provider is created, and a response is sent back.
+1. Client sends a request to create an authorization.
+1. Authorization is created, and a response is sent back with the information that the authorization is not "connected".
+1. Client sends a request to retrieve a login URL to start the OAuth 2.0 consent at the identity provider. The request includes a post-redirect URL to be used in the last step.
+1. Response is returned with a login URL that should be used to start the consent flow.
+1. Client opens a browser with the login URL that was provided in the previous step. The browser is redirected to the identity provider OAuth 2.0 consent flow.
+1. After the consent is approved, the browser is redirected with an authorization code to the redirect URL configured at the identity provider.
+1. API Management uses the authorization code to fetch access and refresh tokens.
+1. API Management receives the tokens and encrypts them.
+1. API Management redirects to the provided URL from step 5.
+
+### Process flow for runtime
+
+The following image shows the process flow to fetch and store authorization and refresh tokens based on a configured authorization. After the tokens have been retrieved, a call is made to the backend API.
++
+1. Client sends request to API Management instance.
+1. The policy [`get-authorization-context`](api-management-access-restriction-policies.md#GetAuthorizationContext) checks if the access token is valid for the current authorization.
+1. If the access token has expired but the refresh token is valid, API Management tries to fetch new access and refresh tokens from the configured identity provider.
+1. The identity provider returns both an access token and a refresh token, which are encrypted and saved to API Management.
+1. After the tokens have been retrieved, the access token is attached using the `set-header` policy as an authorization header to the outgoing request to the backend API.
+1. Response is returned to API Management.
+1. Response is returned to the client.
+
+### Error handling
+
+If acquiring the authorization context results in an error, the outcome depends on how the attribute `ignore-error` is configured in the policy `get-authorization-context`. If the value is set to `false` (default), an error with `500 Internal Server Error` will be returned. If the value is set to `true`, the error will be ignored and execution will proceed with the context variable set to `null`.
+
+If the value is set to `false`, and the on-error section in the policy is configured, the error will be available in the property `context.LastError`. By using the on-error section, the error that is sent back to the client can be adjusted. Errors from API Management can be caught using standard Azure alerts. Read more about [handling errors in policies](api-management-error-handling-policies.md).
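+
+As a sketch (the status code and body shape here are assumptions, not a prescribed pattern), an `on-error` section could surface the error to the client as follows:
+
+```xml
+<on-error>
+    <base />
+    <!-- context.LastError holds the error raised by get-authorization-context -->
+    <set-status code="401" reason="Authorization error" />
+    <set-body>@(context.LastError?.Message)</set-body>
+</on-error>
+```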
+
+### Authorizations FAQ
+
+##### How can I provide feedback and influence the roadmap for this feature?
+
+Use [this](https://aka.ms/apimauthorizations/feedback) form to provide feedback.
+
+##### How are the tokens stored in API Management?
+
+The access token and other secrets (for example, client secrets) are encrypted with envelope encryption and stored in internal, multitenant storage. The data is encrypted with AES-128 using a key that is unique per piece of data; those keys are encrypted asymmetrically with a master certificate stored in Azure Key Vault and rotated every month.
+
+##### When are the access tokens refreshed?
+
+When the policy `get-authorization-context` is executed at runtime, API Management checks if the stored access token is valid. If the token has expired or is near expiry, API Management uses the refresh token to fetch a new access token and a new refresh token from the configured identity provider. If the refresh token has expired, an error is thrown, and the authorization needs to be reauthorized before it will work.
+
+##### What happens if the client secret expires at the identity provider?
+At runtime, API Management can't fetch new tokens, and an error occurs.
+
+* If the authorization is of type authorization code, the client secret needs to be updated at the authorization provider level.
+
+* If the authorization is of type client credentials, the client secret needs to be updated at the authorization level.
+
+##### Is this feature supported using API Management running inside a VNet?
+
+Yes, as long as API Management gateway has outbound internet connectivity on port `443`.
+
+##### What happens when an authorization provider is deleted?
+
+All underlying authorizations and access policies are also deleted.
+
+##### Are the access tokens cached by API Management?
+
+The access token is cached by API Management until 3 minutes before the token's expiration time.
+
+##### What grant types are supported?
+
+For public preview, the Azure AD identity provider supports authorization code and client credentials.
+
+The other identity providers support authorization code. After public preview, more identity providers and grant types will be added.
+
+### Next steps
+
+- Learn how to [configure and use an authorization](authorizations-how-to.md).
+- See [reference](authorizations-reference.md) for supported identity providers in authorizations.
+- Use [policies]() together with authorizations.
+- Authorizations [samples](https://github.com/Azure/APIManagement-Authorizations) GitHub repository.
+- Learn more about OAuth 2.0:
+
+ * [OAuth 2.0 overview](https://aaronparecki.com/oauth-2-simplified/)
+ * [OAuth 2.0 specification](https://oauth.net/2/)
api-management Authorizations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-reference.md
+
+ Title: Reference for OAuth 2.0 authorizations - Azure API Management | Microsoft Docs
+description: Reference for identity providers supported in authorizations in Azure API Management. API Management authorizations manage OAuth 2.0 authorization tokens to APIs.
+Last updated: 05/02/2022
+# Authorizations reference
+This article is a reference for the supported identity providers in API Management [authorizations](authorizations-overview.md) (preview) and their configuration options.
+
+## Azure Active Directory
++
+**Supported grant types**: authorization code and client credentials
++
+### Authorization provider - Authorization code grant type
+
+| Name | Required | Description | Default |
+|||||
+| Provider name | Yes | Name of Authorization provider. | |
+| Client ID | Yes | The ID used to identify this application with the service provider. | |
+| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
+| Login URL | No | The Azure Active Directory login URL. | https://login.windows.net |
+| Tenant ID | No | The tenant ID of your Azure Active Directory application. | common |
+| Resource URL | Yes | The resource to get authorization for. | |
+| Scopes | No | Scopes used for the authorization. Multiple scopes can be defined, separated by a space, for example, "User.Read User.ReadBasic.All". | |
++
+### Authorization - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Authorization name | Yes | Name of Authorization. | |
+
+
+
+### Authorization provider - Client credentials grant type
+| Name | Required | Description | Default |
+|||||
+| Provider name | Yes | Name of Authorization provider. | |
+| Login URL | No | The Azure Active Directory login URL. | https://login.windows.net |
+| Tenant ID | No | The tenant ID of your Azure Active Directory application. | common |
+| Resource URL | Yes | The resource to get authorization for. | |
++
+### Authorization - Client credentials grant type
+| Name | Required | Description | Default |
+|||||
+| Authorization name | Yes | Name of Authorization. | |
+| Client ID | Yes | The ID used to identify this application with the service provider. | |
+| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
+
+
+
+## Google, LinkedIn, Spotify, Dropbox, GitHub
+
+**Supported grant types**: authorization code
+
+### Authorization provider - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Provider name | Yes | Name of Authorization provider. | |
+| Client ID | Yes | The ID used to identify this application with the service provider. | |
+| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
+| Scopes | No | Scopes used for the authorization. Depending on the identity provider, multiple scopes are separated by a space or a comma. The default for most identity providers is a space. | |
++
+### Authorization - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Authorization name | Yes | Name of Authorization. | |
+
+
+
+## Generic OAuth 2
+
+**Supported grant types**: authorization code
++
+### Authorization provider - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Provider name | Yes | Name of Authorization provider. | |
+| Client ID | Yes | The ID used to identify this application with the service provider. | |
+| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
+| Authorization URL | No | The authorization endpoint URL. | |
+| Token URL | No | The token endpoint URL. | |
+| Refresh URL | No | The token refresh endpoint URL. | |
+| Scopes | No | Scopes used for the authorization. Depending on the identity provider, multiple scopes are separated by a space or a comma. The default for most identity providers is a space. | |
++
+### Authorization - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Authorization name | Yes | Name of Authorization. | |
+
+## Next steps
+
+Learn more about [authorizations](authorizations-overview.md) and how to [create and use authorizations](authorizations-how-to.md).
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
+
+ Title: Mitigate OWASP API security top 10 in Azure API Management
+description: Learn how to protect against common API-based vulnerabilities, as identified by the OWASP API Security Top 10 threats, using Azure API Management.
+Last updated: 05/31/2022
+# Recommendations to mitigate OWASP API Security Top 10 threats using API Management
+
+The Open Web Application Security Project ([OWASP](https://owasp.org/about/)) Foundation works to improve software security through its community-led open source software projects, hundreds of chapters worldwide, tens of thousands of members, and by hosting local and global conferences.
+
+The OWASP [API Security Project](https://owasp.org/www-project-api-security/) focuses on strategies and solutions to understand and mitigate the unique *vulnerabilities and security risks of APIs*. In this article, we'll discuss recommendations to use Azure API Management to mitigate the top 10 API threats identified by OWASP.
+
+## Broken object level authorization
+
+API objects that aren't protected with the appropriate level of authorization may be vulnerable to data leaks and unauthorized data manipulation through weak object access identifiers. For example, an attacker could exploit an integer object identifier, which can be easily iterated.
+
+More information about this threat: [API1:2019 Broken Object Level Authorization](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa1-broken-object-level-authorization.md)
+
+### Recommendations
+
+* The best place to implement object level authorization is within the backend API itself. At the backend, the correct authorization decisions can be made at the request (or object) level, where applicable, using logic applicable to the domain and API. Consider scenarios where a given request may yield differing levels of detail in the response, depending on the requestor's permissions and authorization.
+
+* If a vulnerable API can't be changed at the backend, API Management can be used as a fallback. For example:
+
+ * Use a custom policy to implement object-level authorization, if it's not implemented in the backend.
+
+ * Implement a custom policy to map identifiers from request to backend and from backend to client, so that internal identifiers aren't exposed.
+
+ In these cases, the custom policy could be a [policy expression](api-management-policy-expressions.md) with a look-up (for example, a dictionary) or integration with another service through the [send request](api-management-advanced-policies.md#SendRequest) policy.
+
+* For GraphQL scenarios, enforce object-level authorization through the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy, using the `authorize` element.
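A fallback check along the lines described above could combine a policy expression with the send request policy; the following is only a sketch, and the external authorization endpoint URL and response contract are hypothetical:

```xml
<!-- Hypothetical external object-level authorization check; the endpoint URL is illustrative. -->
<send-request mode="new" response-variable-name="authz-response" timeout="10" ignore-error="false">
    <set-url>@($"https://authz.contoso.example/check/{context.Request.MatchedParameters["id"]}")</set-url>
    <set-method>GET</set-method>
</send-request>
<choose>
    <!-- Reject the request unless the (assumed) authorization service returns 200. -->
    <when condition="@(((IResponse)context.Variables["authz-response"]).StatusCode != 200)">
        <return-response>
            <set-status code="403" reason="Forbidden" />
        </return-response>
    </when>
</choose>
```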
+
+## Broken user authentication
+
+Authentication mechanisms are often implemented incorrectly or missing, allowing attackers to exploit implementation flaws to access data.
+
+More information about this threat: [API2:2019 Broken User Authentication](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa2-broken-user-authentication.md)
+
+### Recommendations
+
+Use API Management for user authentication and authorization:
+
+* **Authentication** - API Management supports the following [authentication methods](api-management-authentication-policies.md):
+
+ * [Basic authentication](api-management-authentication-policies.md#Basic) policy - Username and password credentials.
+
+ * [Subscription key](api-management-subscriptions.md) - A subscription key provides a similar level of security as basic authentication and may not be sufficient alone. If the subscription key is compromised, an attacker may get unlimited access to the system.
+
+ * [Client certificate](api-management-authentication-policies.md#ClientCertificate) policy - Using client certificates is more secure than basic credentials or subscription key, but it doesn't allow the flexibility provided by token-based authorization protocols such as OAuth 2.0.
+
+* **Authorization** - API Management supports a [validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy to check the validity of an incoming OAuth 2.0 JWT access token based on information obtained from the OAuth identity provider's metadata endpoint. Configure the policy to check relevant token claims, audience, and expiration time. Learn more about protecting an API using [OAuth 2.0 authorization and Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
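To illustrate, a `validate-jwt` policy checking issuer metadata, audience, and a scope claim might look like the following sketch; it assumes Azure AD as the identity provider, and the tenant, audience, and claim values are placeholders:

```xml
<!-- Sketch assuming Azure AD; {tenant-id}, the audience, and the scope value are placeholders. -->
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
    <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>api://my-api-app-id</audience>
    </audiences>
    <required-claims>
        <claim name="scp" match="any">
            <value>API.Read</value>
        </claim>
    </required-claims>
</validate-jwt>
```

Token signature and expiration are checked automatically against the metadata from the `openid-config` endpoint.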
+
+More recommendations:
+
+* Use [access restriction policies](api-management-access-restriction-policies.md) in API Management to increase security. For example, [call rate limiting](api-management-access-restriction-policies.md#LimitCallRate) slows down bad actors using brute force attacks to compromise credentials.
+
+* APIs should use TLS/SSL (transport security) to protect the credentials or tokens. Credentials and tokens should be sent in request headers and not as query parameters.
+
+* In the API Management [developer portal](api-management-howto-developer-portal.md), configure [Azure Active Directory](api-management-howto-aad.md) or [Azure Active Directory B2C](api-management-howto-aad-b2c.md) as the identity provider to increase the account security. The developer portal uses CAPTCHA to mitigate brute force attacks.
+
+### Related information
+
+* [Authentication vs. authorization](../active-directory/develop/authentication-vs-authorization.md)
+
+## Excessive data exposure
+
+Good API interface design is deceptively challenging. Often, particularly with legacy APIs that have evolved over time, the request and response interfaces contain more data fields than the consuming applications require.
+
+A bad actor could attempt to access the API directly (perhaps by replaying a valid request), or sniff the traffic between server and API. Analysis of the API actions and the data available could yield sensitive data to the attacker, which isn't surfaced to, or used by, the frontend application.
+
+More information about this threat: [API3:2019 Excessive Data Exposure](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa3-excessive-data-exposure.md)
+
+### Recommendations
+
+* The best approach to mitigating this vulnerability is to ensure that the external interfaces defined at the backend API are designed carefully and, ideally, independently of the data persistence. They should contain only the fields required by consumers of the API. APIs should be reviewed frequently, and legacy fields deprecated, then removed.
+
+ In API Management, use:
+ * [Revisions](api-management-revisions.md) to gracefully control nonbreaking changes, for example, the addition of a field to an interface. You may use revisions along with a versioning implementation at the backend.
+
+ * [Versions](api-management-versions.md) for breaking changes, for example, the removal of a field from an interface.
+
+* If it's not possible to alter the backend interface design and excessive data is a concern, use API Management [transformation policies](transform-api.md) to rewrite response payloads and mask or filter data. For example, [remove unneeded JSON properties](./policies/filter-response-content.md) from a response body.
+
+* [Response content validation](validation-policies.md#validate-content) in API Management can be used with an XML or JSON schema to block responses with undocumented properties or improper values. The policy also supports blocking responses exceeding a specified size.
+
+* Use the [validate status code](validation-policies.md#validate-status-code) policy to block responses with errors undefined in the API schema.
+
+* Use the [validate headers](validation-policies.md#validate-headers) policy to block responses with headers that aren't defined in the schema or don't comply to their definition in the schema. Remove unwanted headers with the [set header](api-management-transformation-policies.md#SetHTTPheader) policy.
+
+* For GraphQL scenarios, use the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy to validate GraphQL requests, authorize access to specific query paths, and limit response size.
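Putting a few of these response-side mitigations together, an outbound policy section could be sketched as follows; the size limit and header name are illustrative, not recommendations:

```xml
<!-- Outbound sketch: block undocumented response content, undefined status codes, and an unwanted header. -->
<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent">
    <content type="application/json" validate-as="json" action="prevent" />
</validate-content>
<validate-status-code unspecified-status-code-action="prevent" />
<set-header name="X-Powered-By" exists-action="delete" />
```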
+
+## Lack of resources and rate limiting
+
+Lack of rate limiting may lead to data exfiltration or successful DDoS attacks on backend services, causing an outage for all consumers.
+
+More information about this threat: [API4:2019 Lack of resources and rate limiting](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa4-lack-of-resources-and-rate-limiting.md)
+
+### Recommendations
+
+* Use [rate limit](api-management-access-restriction-policies.md#LimitCallRate) (short-term) and [quota limit](api-management-access-restriction-policies.md#SetUsageQuota) (long-term) policies to control the allowed number of API calls or bandwidth per consumer.
+
+* Define strict request object definitions and their properties in the OpenAPI definition. For example, define the max value for paging integers, maxLength and regular expression (regex) for strings. Enforce those schemas with the [validate content](validation-policies.md#validate-content) and [validate parameters](validation-policies.md#validate-parameters) policies in API Management.
+
+* Enforce maximum size of the request with the [validate content](validation-policies.md#validate-content) policy.
+
+* Optimize performance with [built-in caching](api-management-howto-cache.md), thus reducing the consumption of CPU, memory, and networking resources for certain operations.
+
+* Enforce authentication for API calls (see [Broken user authentication](#broken-user-authentication)). Revoke access for abusive users. For example, deactivate the subscription key, block the IP address with the [restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) policy, or reject requests for a certain user claim from a [JWT token](api-management-access-restriction-policies.md#ValidateJWT).
+
+* Apply a [CORS](api-management-cross-domain-policies.md#CORS) policy to control the websites that are allowed to load the resources served through the API. To avoid overly permissive configurations, don't use wildcard values (`*`) in the CORS policy.
+
+* Minimize the time it takes a backend service to respond. The longer the backend service takes to respond, the longer the connection is occupied in API Management, therefore reducing the number of requests that can be served in a given timeframe.
+
+ * Define `timeout` in the [forward request](api-management-advanced-policies.md#ForwardRequest) policy.
+
+ * Use the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy for GraphQL APIs and configure `max-depth` and `max-size` parameters.
+
+ * Limit the number of parallel backend connections with the [limit concurrency](api-management-advanced-policies.md#LimitConcurrency) policy.
+
+* While API Management can protect backend services from DDoS attacks, it may be vulnerable to those attacks itself. Deploy a bot protection service in front of API Management (for example, [Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md), [Azure Front Door](../frontdoor/front-door-overview.md), or [Azure DDoS Protection Service](../ddos-protection/ddos-protection-overview.md)) to better protect against DDoS attacks. When using a WAF with Azure Application Gateway or Azure Front Door, consider using [Microsoft_BotManagerRuleSet_1.0](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set).
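Several of the throttling recommendations above can be sketched in one policy, with illustrative numbers only (tune limits to your own traffic profile):

```xml
<!-- Inbound sketch: per-caller short-term rate limit and long-term quota, keyed on subscription or caller IP. -->
<rate-limit-by-key calls="10" renewal-period="60"
    counter-key="@(context.Subscription?.Key ?? context.Request.IpAddress)" />
<quota-by-key calls="10000" renewal-period="604800"
    counter-key="@(context.Subscription?.Key ?? context.Request.IpAddress)" />

<!-- Backend sketch: cap parallel backend connections and the time a connection may stay occupied. -->
<limit-concurrency key="backend" max-count="100">
    <forward-request timeout="20" />
</limit-concurrency>
```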
+
+## Broken function level authorization
+
+Complex access control policies with different hierarchies, groups, and roles, and an unclear separation between administrative and regular functions lead to authorization flaws. By exploiting these issues, attackers gain access to other users' resources or administrative functions.
+
+More information about this threat: [API5:2019 Broken function level authorization](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa5-broken-function-level-authorization.md)
+
+### Recommendations
+
+* By default, protect all API endpoints in API Management with [subscription keys](api-management-subscriptions.md).
+
+* Define a [validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy and enforce required token claims. If certain operations require stricter claims enforcement, define extra `validate-jwt` policies for those operations only.
+
+* Use an Azure virtual network or Private Link to hide API endpoints from the internet. Learn more about [virtual network options](virtual-network-concepts.md) with API Management.
+
+* Don't define [wildcard API operations](add-api-manually.md#add-and-test-a-wildcard-operation) (that is, "catch-all" APIs with `*` as the path). Ensure that API Management only serves requests for explicitly defined endpoints, and requests to undefined endpoints are rejected.
+
+* Don't publish APIs with [open products](api-management-howto-add-products.md#access-to-product-apis) that don't require a subscription.
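An operation-scoped policy that layers stricter claims enforcement on top of API-level checks could be sketched as follows; the claim name and role value are placeholders for your own authorization model:

```xml
<!-- Operation-level sketch: inherit API-level policies via <base />, then require an admin role claim. -->
<inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="403">
        <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
        <required-claims>
            <claim name="roles" match="any">
                <value>Admin.Access</value>
            </claim>
        </required-claims>
    </validate-jwt>
</inbound>
```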
+
+## Mass assignment
+
+If an API offers more fields than the client requires for a given action, an attacker may inject excessive properties to perform unauthorized operations on data. Attackers may discover undocumented properties by inspecting the format of requests and responses or other APIs, or guessing them. This vulnerability is especially applicable if you don't use strongly typed programming languages.
+
+More information about this threat: [API6:2019 Mass assignment](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa6-mass-assignment.md)
+
+### Recommendations
+
+* External API interfaces should be decoupled from the internal data implementation. Avoid binding API contracts directly to data contracts in backend services. Review the API design frequently, and deprecate and remove legacy properties using [versioning](api-management-versions.md) in API Management.
+
+* Precisely define XML and JSON contracts in the API schema and use [validate content](validation-policies.md#validate-content) and [validate parameters](validation-policies.md#validate-parameters) policies to block requests and responses with undocumented properties. Blocking requests with undocumented properties mitigates attacks, while blocking responses with undocumented properties makes it harder to reverse-engineer potential attack vectors.
+
+* If the backend interface can't be changed, use [transformation policies](transform-api.md) to rewrite request and response payloads and decouple the API contracts from backend contracts. For example, mask or filter data or [remove unneeded JSON properties](./policies/filter-response-content.md).
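A transformation along these lines can be sketched with a `set-body` policy expression in the outbound section; the `internalCost` property is a hypothetical example of an internal field that shouldn't reach clients:

```xml
<!-- Outbound sketch: strip a hypothetical internal property from the JSON response body. -->
<set-body>@{
    var body = JObject.Parse(context.Response.Body.As<string>(preserveContent: true));
    body.Property("internalCost")?.Remove();
    return body.ToString();
}</set-body>
```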
+
+## Security misconfiguration
+
+Attackers may attempt to exploit security misconfiguration vulnerabilities such as:
+
+* Missing security hardening
+* Unnecessary enabled features
+* Network connections unnecessarily open to the internet
+* Use of weak protocols or ciphers
+* Other settings or endpoints that may allow unauthorized access to the system
+
+More information about this threat: [API7:2019 Security misconfiguration](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa7-security-misconfiguration.md)
+
+### Recommendations
+
+* Correctly configure [gateway TLS](api-management-howto-manage-protocols-ciphers.md). Don't use vulnerable protocols (for example, TLS 1.0, 1.1) or ciphers.
+
+* Configure APIs to accept encrypted traffic only, for example through HTTPS or WSS protocols.
+
+* Consider deploying API Management behind a [private endpoint](private-endpoint.md) or attached to a [virtual network deployed in internal mode](api-management-using-with-internal-vnet.md). In internal networks, access can be controlled from within the private network (via firewall or network security groups) and from the internet (via a reverse proxy).
+
+* Use Azure API Management policies:
+
+ * Always inherit parent policies through the `<base>` tag.
+
+ * When using OAuth 2.0, configure and test the [validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy to check the existence and validity of the JWT token before it reaches the backend. Automatically check the token expiration time, token signature, and issuer. Enforce claims, audiences, token expiration, and token signature through policy settings.
+
+ * Configure the [CORS](api-management-cross-domain-policies.md#CORS) policy and don't use wildcard `*` for any configuration option. Instead, explicitly list allowed values.
+
+ * Set [validation policies](validation-policies.md) to `prevent` in production environments to validate JSON and XML schemas, headers, query parameters, and status codes, and to enforce the maximum size for request or response.
+
+ * If API Management is outside a network boundary, client IP validation is still possible using the [restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) policy. Ensure that it uses an allowlist, not a blocklist.
+
+ * If client certificates are used between caller and API Management, use the [validate client certificate](api-management-access-restriction-policies.md#validate-client-certificate) policy. Ensure that the `validate-revocation`, `validate-trust`, `validate-not-before`, and `validate-not-after` attributes are all set to `true`.
+
+ * Client certificates (mutual TLS) can also be applied between API Management and the backend. The backend should:
+
+ * Have authorization credentials configured
+
+ * Validate the certificate chain where applicable
+
+ * Validate the certificate name where applicable
+
+* For GraphQL scenarios, use the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy. Ensure that the `authorization` element and `max-size` and `max-depth` attributes are set.
+
+* Don't store secrets in policy files or in source control. Always use API Management [named values](api-management-howto-properties.md) or fetch the secrets at runtime using custom policy expressions.
+
+ * Named values should be [integrated with Key Vault](api-management-howto-properties.md#key-vault-secrets) or encrypted within API Management by marking them "secret". Never store secrets in plain-text named values.
+
+* Publish APIs through [products](api-management-howto-add-products.md), which require subscriptions. Don't use [open products](api-management-howto-add-products.md#access-to-product-apis) that don't require a subscription.
+
+* Use Key Vault integration to manage all certificates. This centralizes certificate management and can help ease operations management tasks such as certificate renewal or revocation.
+
+* When using the [self-hosted-gateway](self-hosted-gateway-overview.md), ensure that there's a process in place to update the image to the latest version periodically.
+
+* Represent backend services as [backend entities](backends.md). Configure authorization credentials, certificate chain validation, and certificate name validation where applicable.
+
+* When using the [developer portal](api-management-howto-developer-portal.md):
+
+ * If you choose to [self-host](developer-portal-self-host.md) the developer portal, ensure there's a process in place to periodically update the self-hosted portal to the latest version. Updates for the default managed version are automatic.
+
+ * Use [Azure Active Directory (Azure AD)](api-management-howto-aad.md) or [Azure Active Directory B2C](api-management-howto-aad-b2c.md) for user sign-up and sign-in. Disable the default username and password authentication, which is less secure.
+
+ * Assign [user groups](api-management-howto-create-groups.md#-associate-a-group-with-a-product) to products, to control the visibility of APIs in the portal.
+
+* Use [Azure Policy](security-controls-policy.md) to enforce API Management resource-level configuration and role-based access control (RBAC) permissions to control resource access. Grant minimum required privileges to every user.
+
+* Use a [DevOps process](devops-api-development-templates.md) and infrastructure-as-code approach outside of a development environment to ensure consistency of API Management content and configuration changes and to minimize human errors.
+
+* Don't use any deprecated features.
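As an example of the explicit-values guidance for CORS above, a sketch without wildcards might look like this; the origin, methods, and headers are placeholders for your own allowed values:

```xml
<!-- CORS sketch with explicit values only; https://contoso.example is a placeholder origin. -->
<cors allow-credentials="true">
    <allowed-origins>
        <origin>https://contoso.example</origin>
    </allowed-origins>
    <allowed-methods>
        <method>GET</method>
        <method>POST</method>
    </allowed-methods>
    <allowed-headers>
        <header>Authorization</header>
        <header>Content-Type</header>
    </allowed-headers>
</cors>
```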
+
+## Injection
+
+Any endpoint accepting user data is potentially vulnerable to an injection exploit. Examples include, but aren't limited to:
+
+* [Command injection](https://owasp.org/www-community/attacks/Command_Injection), where a bad actor attempts to alter the API request to execute commands on the operating system hosting the API
+
+* [SQL injection](https://owasp.org/www-community/attacks/SQL_Injection), where a bad actor attempts to alter the API request to execute commands and queries against the database an API depends on
+
+More information about this threat: [API8:2019 Injection](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa8-injection.md)
+
+### Recommendations
+
+* [Modern Web Application Firewall (WAF) policies](https://github.com/SpiderLabs/ModSecurity) cover many common injection vulnerabilities. While API Management doesn't have a built-in WAF component, deploying a WAF upstream (in front) of the API Management instance is strongly recommended. For example, use [Azure Application Gateway](/azure/architecture/reference-architectures/apis/protect-apis) or [Azure Front Door](../frontdoor/front-door-overview.md).
+
+ > [!IMPORTANT]
+ > Ensure that a bad actor can't bypass the gateway hosting the WAF and connect directly to the API Management gateway or backend API itself. Possible mitigations include: [network ACLs](../virtual-network/network-security-groups-overview.md), using API Management policy to [restrict inbound traffic by client IP](api-management-access-restriction-policies.md#RestrictCallerIPs), removing public access where not required, and [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) (also known as mutual TLS or mTLS).
+
+* Use schema and parameter [validation](validation-policies.md) policies, where applicable, to further constrain and validate the request before it reaches the backend API service.
+
+ The schema supplied with the API definition should have a regex pattern constraint applied to vulnerable fields. Each regex should be tested to ensure that it constrains the field sufficiently to mitigate common injection attempts.
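Combining the WAF-bypass mitigation and parameter validation above, an inbound sketch could look like the following; the address range stands in for your WAF's egress IPs and is purely illustrative:

```xml
<!-- Inbound sketch: only accept traffic from the (placeholder) WAF address range,
     then reject requests whose parameters aren't declared in the API schema. -->
<ip-filter action="allow">
    <address-range from="10.0.1.0" to="10.0.1.255" />
</ip-filter>
<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="prevent" errors-variable-name="paramErrors" />
```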
+
+### Related information
+
+* [Deployment stamps pattern with Azure Front Door and API Management](/azure/architecture/patterns/deployment-stamp)
+
+* [Deploy Azure API Management with Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md)
+
+## Improper assets management
+
+Vulnerabilities related to improper assets management include:
+
+* Lack of proper API documentation or ownership information
+
+* Excessive numbers of older API versions, which may be missing security fixes
+
+More information about this threat: [API9:2019 Improper assets management](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa9-improper-assets-management.md)
+
+### Recommendations
+
+* Use a well-defined [OpenAPI specification](https://swagger.io/specification/) as the source for importing REST APIs. The specification allows encapsulation of the API definition, including self-documenting metadata.
+
+ * Use API interfaces with precise paths, data schemas, headers, query parameters, and status codes. Avoid [wildcard operations](add-api-manually.md#add-and-test-a-wildcard-operation). Provide descriptions for each API and operation and include contact and license information.
+
+ * Avoid endpoints that don't directly contribute to the business objective. They unnecessarily increase the attack surface area and make it harder to evolve the API.
+
+* Use [revisions](api-management-revisions.md) and [versions](api-management-versions.md) in API Management to govern and control the API endpoints. Have a strong backend versioning strategy and commit to a maximum number of supported API versions (for example, 2 or 3 prior versions). Plan to quickly deprecate and ultimately remove older, often less secure, API versions.
+
+* Use an API Management instance per environment (such as development, test, and production). Ensure that each API Management instance connects to its dependencies in the same environment. For example, in the test environment, the test API Management resource should connect to a test Azure Key Vault resource and the test versions of backend services. Use [DevOps automation and infrastructure-as-code practices](devops-api-development-templates.md) to help maintain consistency and accuracy between environments and reduce human errors.
+
+* Use tags to organize APIs and products and group them for publishing.
+
+* Publish APIs for consumption through the built-in [developer portal](api-management-howto-developer-portal.md). Make sure the API documentation is up-to-date.
+
+* Discover undocumented or unmanaged APIs and expose them through API Management for better control.
+
+## Insufficient logging and monitoring
+
+Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems to tamper with, and extract or destroy data. Most breach studies demonstrate that the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.
+
+More information about this threat: [API10:2019 Insufficient logging and monitoring](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xaa-insufficient-logging-monitoring.md)
+
+### Recommendations
+
+* Understand [observability options](observability.md) in Azure API Management and [best practices](/azure/architecture/best-practices/monitoring) for monitoring in Azure.
+
+* Monitor API traffic with [Azure Monitor](api-management-howto-use-azure-monitor.md).
+
+* Log to [Application Insights](api-management-howto-app-insights.md) for debugging purposes. Correlate [transactions in Application Insights](../azure-monitor/app/transaction-diagnostics.md) between API Management and the backend API to [trace them end-to-end](../azure-monitor/app/correlation.md).
+
+* If needed, forward custom events to [Event Hubs](api-management-howto-log-event-hubs.md).
+
+* Set alerts in Azure Monitor and Application Insights - for example, for the [capacity metric](api-management-howto-autoscale.md) or for excessive requests or bandwidth transfer.
+
+* Use the [emit metrics](api-management-advanced-policies.md#emit-metrics) policy for custom metrics.
+
+* Use the Azure Activity log for tracking activity in the service.
+
+* Use custom events in [Azure Application Insights](../azure-monitor/app/api-custom-events-metrics.md) and [Azure Monitor](../azure-monitor/app/custom-data-correlation.md) as needed.
+
+* Configure [OpenTelemetry](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md#introduction-to-opentelemetry) for [self-hosted gateways](self-hosted-gateway-overview.md) on Kubernetes.
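A custom metric emitted per request can be sketched with the emit metrics policy; the metric name, namespace, and custom dimension below are illustrative:

```xml
<!-- Sketch of a custom metric; "custom-requests" and "apim-security" are placeholder names. -->
<emit-metric name="custom-requests" value="1" namespace="apim-security">
    <dimension name="API ID" />
    <dimension name="Client IP" value="@(context.Request.IpAddress)" />
</emit-metric>
```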
+
+## Next steps
+
+* [Security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline)
+* [Security controls by Azure policy](security-controls-policy.md)
+* [Landing zone accelerator for API Management](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator)
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
First, enable Azure Active Directory authentication to the Azure database by ass
1. If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md).
-1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az_ad_user_list) and replace *\<user-principal-name>*. The result is saved to a variable.
+1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az-ad-user-list) and replace *\<user-principal-name>*. The result is saved to a variable.
```azurecli-interactive
azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].objectId --output tsv)
```
First, enable Azure Active Directory authentication to the Azure database by ass
# [Azure SQL Database](#tab/sqldatabase)
-3. Add this Azure AD user as an Active Directory administrator using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az_sql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+3. Add this Azure AD user as an Active Directory administrator using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az-sql-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
```azurecli-interactive
az sql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name ADMIN --object-id $azureaduser
```
First, enable Azure Active Directory authentication to the Azure database by ass
# [Azure Database for MySQL](#tab/mysql)
-3. Add this Azure AD user as an Active Directory administrator using [`az mysql server ad-admin create`](/cli/azure/mysql/server/ad-admin#az_mysql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+3. Add this Azure AD user as an Active Directory administrator using [`az mysql server ad-admin create`](/cli/azure/mysql/server/ad-admin#az-mysql-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
```azurecli-interactive
az mysql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name <user-principal-name> --object-id $azureaduser
```
First, enable Azure Active Directory authentication to the Azure database by ass
# [Azure Database for PostgreSQL](#tab/postgresql)
-3. Add this Azure AD user as an Active Directory administrator using [`az postgres server ad-admin create`](/cli/azure/postgres/server/ad-admin#az_postgres_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+3. Add this Azure AD user as an Active Directory administrator using [`az postgres server ad-admin create`](/cli/azure/postgres/server/ad-admin#az-postgres-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
```azurecli-interactive
az postgres server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name <user-principal-name> --object-id $azureaduser
```
First, enable Azure Active Directory authentication to the Azure database by ass
Next, you configure your App Service app to connect to SQL Database with a managed identity.
-1. Enable a managed identity for your App Service app with the [az webapp identity assign](/cli/azure/webapp/identity#az_webapp_identity_assign) command in the Cloud Shell. In the following command, replace *\<app-name>*.
+1. Enable a managed identity for your App Service app with the [az webapp identity assign](/cli/azure/webapp/identity#az-webapp-identity-assign) command in the Cloud Shell. In the following command, replace *\<app-name>*.
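The identity-assignment step above can be sketched as follows. The `<group-name>` and `<app-name>` values are the same placeholders used elsewhere in this tutorial; capturing `principalId` is optional but useful for any role or database-user assignments that follow.

```shell
# Enable the system-assigned managed identity on the web app and
# capture the identity's object (principal) ID for later use.
principalId=$(az webapp identity assign \
  --resource-group <group-name> \
  --name <app-name> \
  --query principalId --output tsv)

echo "$principalId"
```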
# [System-assigned identity](#tab/systemassigned/sqldatabase)
applied-ai-services Compose Custom Models Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-preview.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
recommendations: false
# Compose custom models v3.0 | Preview

> [!NOTE]
-> This how-to guide references Form Recognizer v3.0 (preview). To use Form Recognizer v2.1 (GA), see [Compose custom models v2.1.](compose-custom-models.md).
+> This how-to guide references Form Recognizer v3.0 (preview). To use Form Recognizer v2.1 (GA), see [Compose custom models v2.1](compose-custom-models.md).
-A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign up to 100 trained custom models to a single composed model. When analyze documents with a composed model, Form Recognizer will first classify the form you submitted, then choose the best matching assigned model, and return results the results.
+A composed model is created by taking a collection of custom models and assigning them to a single model ID. You can assign up to 100 trained custom models to a single composed model ID. When a document is submitted to a composed model, the service performs a classification step to decide which custom model most accurately represents the form presented for analysis. Composed models are useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-To learn more, see [Composed custom models](concept-composed-models.md)
+To learn more, see [Composed custom models](concept-composed-models.md).
In this article, you'll learn how to create and use composed custom models to analyze your forms and documents.
In this article, you'll learn how to create and use composed custom models to an
To get started, you'll need the following resources:
-* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/)
+* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/).
* **A Form Recognizer instance**. Once you have your Azure subscription, [create a Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
To get started, you'll need the following resources:
## Create your custom models
-First, you'll need to a set of custom models to compose. You can use the Form Recognizer Studio, REST API, or client-library SDKs. The steps are as follows:
+First, you'll need a set of custom models to compose. You can use the Form Recognizer Studio, REST API, or client-library SDKs. The steps are as follows:
* [**Assemble your training dataset**](#assemble-your-training-dataset)
* [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.mi
### [Form Recognizer Studio](#tab/studio)
-To create custom models, you start with configuring your project:
+To create custom models, start with configuring your project:
-1. From the Studio home, select the [Custom form project](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to open the Custom form home page.
+1. From the Studio homepage, select [**Create new**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) from the Custom model card.
1. Use the ➕ **Create a project** command to start the new project configuration wizard.
See [Form Recognizer Studio: labeling as tables](quickstarts/try-v3-form-recogni
### [REST API](#tab/rest)
-Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
+Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
Label files contain key-value associations that a user has entered manually. They're needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to generate these files.
Training with labels leads to better performance in some scenarios. To train wit
|Language |Method|
|--|--|
|**C#**|[**StartBuildModel**](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentmodeladministrationclient.startbuildmodel?view=azure-dotnet-preview#azure-ai-formrecognizer-documentanalysis-documentmodeladministrationclient-startbuildmodel&preserve-view=true)|
-|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.administration.documentmodeladministrationclient.beginbuildmodel?view=azure-java-preview&preserve-view=true)|
+|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.administration.documentmodeladministrationclient.beginbuildmodel?view=azure-java-preview&preserve-view=true)|
|**JavaScript** | [**beginBuildModel**](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-preview#@azure-ai-form-recognizer-documentmodeladministrationclient-beginbuildmodel&preserve-view=true)|
| **Python** | [**begin_build_model**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.documentmodeladministrationclient?view=azure-python-preview#azure-ai-formrecognizer-aio-documentmodeladministrationclient-begin-build-model&preserve-view=true)|
When you train models using the [**Form Recognizer Studio**](https://formrecogni
1. Once the model is ready, use the **Test** command to validate it with your test documents and observe the results.

---

#### Analyze documents

The custom model **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You should provide the composed model ID for the `modelID` parameter in your applications.
The custom model **Analyze** operation requires you to provide the `modelID` in
#### Manage your composed models

You can manage your custom models throughout their life cycles:
-
+* Test and validate new documents.
+* Download your model to use in your applications.
+* Delete your model when its lifecycle is complete.
The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services
#### Analyze documents
-You can make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) request using a unique model name in the request parameters.
+To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) request, use a unique model name in the request parameters.
:::image type="content" source="media/custom-model-analyze-request.png" alt-text="Screenshot of a custom model request URL.":::
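As a sketch of the request shape, an analyze call with `curl` might look like the following. The endpoint, key, model ID, API version, and document URL are all placeholders for your own values; confirm the exact URL format against the REST reference linked above.

```shell
# Submit a document URL for analysis with a composed model.
# All angle-bracket values are placeholders for your own resource details.
curl -i -X POST \
  "<endpoint>/formrecognizer/documentModels/<composed-model-id>:analyze?api-version=<api-version>" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  --data '{"urlSource": "<document-url>"}'
```

The response's `Operation-Location` header points to the analysis result, which you then retrieve with a GET request.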
You can use the programming language of your choice to create a composed model:
#### Analyze documents
-Once you have built your composed model, it can be used to analyze forms and documents You can use your composed `model ID` and let the service decide which of your aggregated custom models fits best according to the document provided.
+Once you've built your composed model, you can use it to analyze forms and documents. Use your composed `model ID` and let the service decide which of your aggregated custom models fits best according to the document provided.
|Programming language| Code sample |
|--|--|
Once you have built your composed model, it can be used to analyze forms and doc
## Manage your composed models
-Custom models can be managed throughout their lifecycle. You can view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account.
+You can manage a custom model at each stage in its life cycle. You can view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account.
|Programming language| Code sample |
|--|--|
Custom models can be managed throughout their lifecycle. You can view a list of
## Next steps
-Try one of our quickstarts to get started using Form Recognizer preview
+Try one of our Form Recognizer quickstarts:
> [!div class="nextstepaction"]
> [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md)
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
In this article, you'll learn how to create Form Recognizer custom and composed
## Sample Labeling tool
-You can see how data is extracted from custom forms by trying our Sample Labeling tool. You'll need the following resources:
+Try extracting data from custom forms using our Sample Labeling tool. You'll need the following resources:
* An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
You can see how data is extracted from custom forms by trying our Sample Labelin
In the Form Recognizer UI:

1. Select **Use Custom to train a model with labels and get key value pairs**.
-
- :::image type="content" source="media/label-tool/fott-use-custom.png" alt-text="Screenshot: FOTT tool select custom option.":::
+
+ :::image type="content" source="media/label-tool/fott-use-custom.png" alt-text="Screenshot of the FOTT tool select custom model option.":::
1. In the next window, select **New project**:
- :::image type="content" source="media/label-tool/fott-new-project.png" alt-text="Screenshot: FOTT tool select new project.":::
+ :::image type="content" source="media/label-tool/fott-new-project.png" alt-text="Screenshot of the FOTT tool select new project option.":::
## Create your models
You [train your model](./quickstarts/try-sdk-rest-api.md#train-a-custom-model)
When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model and add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
+Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model. Add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
[Get started with Train with labels](label-tool.md)
When you train models using the [**Form Recognizer Sample Labeling tool**](https
### [**REST API**](#tab/rest-api)
-The [**REST API**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#train-a-custom-model), will return a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model:
+The [**REST API**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#train-a-custom-model) will return a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model:
:::image type="content" source="media/model-id.png" alt-text="Screenshot: the returned location header containing the model ID.":::
The [**REST API**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language
#### Compose your custom models
-After you have gathered your custom models corresponding to a single form type, you can compose them into a single model.
+After you've gathered your custom models corresponding to a single form type, you can compose them into a single model.
### [**Form Recognizer Sample Labeling tool**](#tab/fott)
The **Sample Labeling tool** enables you to quickly get started training models
After you have completed training, compose your models as follows:
-1. On the left rail menu, select the **Model Compose icon** (merging arrow).
+1. On the left rail menu, select the **Model Compose** icon (merging arrow).
1. In the main window, select the models you wish to assign to a single model ID. Models with the arrows icon are already composed models.
After you have completed training, compose your models as follows:
When the operation completes, your newly composed model will appear in the list.
- :::image type="content" source="media/custom-model-compose.png" alt-text="Screenshot: model compose window." lightbox="media/custom-model-compose-expanded.png":::
+ :::image type="content" source="media/custom-model-compose.png" alt-text="Screenshot of the model compose window." lightbox="media/custom-model-compose-expanded.png":::
### [**REST API**](#tab/rest-api)
Use the programming language code of your choice to create a composed model that
### [**Form Recognizer Sample Labeling tool**](#tab/fott)
-1. On the tool's left-pane menu, select the **Analyze icon** (lightbulb).
+1. On the tool's left-pane menu, select the **Analyze icon** (light bulb).
1. Choose a local file or image URL to analyze.
Using the programming language of your choice to analyze a form or document with
-Test your newly trained models by [analyzing forms](./quickstarts/try-sdk-rest-api.md#analyze-forms-with-a-custom-model) that were not part of the training dataset. Depending on the reported accuracy, you may want to do further training to improve the model. You can continue further training to [improve results](label-tool.md#improve-results).
+Test your newly trained models by [analyzing forms](./quickstarts/try-sdk-rest-api.md#analyze-forms-with-a-custom-model) that weren't part of the training dataset. Depending on the reported accuracy, you may want to do further training to improve the model. You can continue further training to [improve results](label-tool.md#improve-results).
## Manage your custom models

You can [manage your custom models](./quickstarts/try-sdk-rest-api.md#manage-custom-models) throughout their lifecycle by viewing a [list of all custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetCustomModels) under your subscription, retrieving information about [a specific custom model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetCustomModel), and [deleting custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/DeleteCustomModel) from your account.
-Great! You have learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
+Great! You've learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
## Next steps
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Title: Form Recognizer business card model
-description: Concepts encompassing data extraction and analysis using prebuilt business card model
+description: Concepts related to data extraction and analysis using the prebuilt business card model.
Previously updated : 03/11/2022 Last updated : 06/06/2022 recommendations: false- <!-- markdownlint-disable MD033 -->
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|-|--|
-|**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-businessCard**|
+|**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-businessCard**|
### Try Form Recognizer
You'll need a business card document. You can use our [sample business card docu
| Model | Language—Locale code | Default |
|--|:-|:|
-|Business card| <ul><li>English (United States)ΓÇöen-US</li><li> English (Australia)ΓÇöen-AU</li><li>English (Canada)ΓÇöen-CA</li><li>English (United Kingdom)ΓÇöen-GB</li><li>English (India)ΓÇöen-IN</li></ul> | Autodetected |
+|Business card| <ul><li>English (United States)—en-US</li><li> English (Australia)—en-AU</li><li>English (Canada)—en-CA</li><li>English (United Kingdom)—en-GB</li><li>English (India)—en-IN</li><li>English (Japan)—en-JP</li><li>Japanese (Japan)—ja-JP</li></ul> | Autodetected (en-US or ja-JP) |
## Field extraction
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Previously updated : 03/25/2022 Last updated : 06/06/2022 recommendations: false
recommendations: false
With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-* ```Custom form```and ```Custom document``` models can be composed together into a single composed model when they're trained with the same API version or an API version later than ```2021-01-30-preview```. For more information on composing custom template and custom neural models, see [compose model limits](#compose-model-limits).
+* ```Custom form```and ```Custom document``` models can be composed together into a single composed model when they're trained with the same API version or an API version later than ```2021-06-30-preview```. For more information on composing custom template and custom neural models, see [compose model limits](#compose-model-limits).
* With the model compose operation, you can assign up to 100 trained custom models to a single composed model. To analyze a document with a composed model, Form Recognizer first classifies the submitted form, chooses the best-matching assigned model, and returns results.
* For **_custom template models_**, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms may belong to one of several templates.
* The response will include a ```docType``` property to indicate which of the composed models was used to analyze the document.
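As an illustrative sketch, the compose operation over REST might look like the following. All angle-bracket values and model IDs are placeholders, and the exact field names and API version should be confirmed against the current REST reference before use.

```shell
# Compose two existing custom models under a single new model ID.
# Endpoint, key, API version, and model IDs are placeholders.
curl -i -X POST \
  "<endpoint>/formrecognizer/documentModels:compose?api-version=<api-version>" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  --data '{
    "modelId": "composedPurchaseOrders",
    "description": "Supply and equipment purchase orders",
    "componentModels": [
      {"modelId": "supplyPurchaseOrders"},
      {"modelId": "equipmentPurchaseOrders"}
    ]
  }'
```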
With composed models, you can assign multiple custom models to a composed model
### Composed model compatibility
- |Custom model type | API Version |Custom form 2021-01-30-preview (v3.0)| Custom document 2021-01-30-preview(v3.0) | Custom form GA version (v2.1) or earlier|
+ |Custom model type | API Version |Custom form 2021-06-30-preview (v3.0)| Custom document 2021-06-30-preview (v3.0) | Custom form GA version (v2.1) or earlier|
|--|--|--|--|--|
-|**Custom template** (updated custom form)| 2021-01-30-preview | &#10033;| Γ£ô | X |
-|**Custom neural**| trained with current API version (2021-01-30-preview) |Γ£ô |Γ£ô | X |
+|**Custom template** (updated custom form)| 2021-06-30-preview | &#10033;| ✓ | X |
+|**Custom neural**| trained with current API version (2021-06-30-preview) |✓ |✓ | X |
|**Custom form**| Custom form GA version (v2.1) or earlier | X | X| ✓|

**Table symbols**: ✔—supported; **X**—not supported; ✱—unsupported for this API version, but will be supported in a future API version.
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Title: Form Recognizer custom neural model
-description: Learn about custom neural (neural) model type, its features and how you train a model with high accuracy to extract data from structured and unstructured documents
+description: Learn about custom neural (neural) model type, its features and how you train a model with high accuracy to extract data from structured and unstructured documents.
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
Custom neural models or neural models are a deep learned model that combines lay
|semi-structured | invoices, purchase orders |
|unstructured | contracts, letters|
-Custom neural models share the same labeling format and strategy as custom template models. Currently custom neural models only support a subset of the field types supported by custom template models.
+Custom neural models share the same labeling format and strategy as [custom template](concept-custom-template.md) models. Currently custom neural models only support a subset of the field types supported by custom template models.
## Model capabilities

Custom neural models currently support key-value pairs, selection marks, and tabular fields. Future releases will add support for signature fields.
-| Form fields | Selection marks | Tables | Signature | Region |
-|--|--|--|--|--|
-| Supported| Supported | Unsupported | Unsupported | Unsupported |
+| Form fields | Selection marks | Tabular fields | Signature | Region |
+|:--:|:--:|:--:|:--:|:--:|
+| Supported | Supported | Supported | Unsupported | Unsupported |
+
+## Tabular fields
+
+With the release of API version **2022-06-30-preview**, custom neural models will support tabular fields (tables):
+
+* Models trained with API version 2022-06-30-preview or later will accept tabular field labels.
+* Documents analyzed with custom neural models using API version 2022-06-30-preview or later will produce tabular fields aggregated across the tables.
+* The results can be found in the ```analyzeResult``` object's ```documents``` array that is returned following an analysis operation.
+
+Tabular fields support **cross page tables** by default:
+
+* To label a table that spans multiple pages, label each row of the table across the different pages in a single table.
+* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages.
+
+Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
## Supported regions
-In public preview custom neural models can only be trained in select Azure regions.
+For the **2022-06-30-preview**, custom neural models can only be trained in the following Azure regions:
* AustraliaEast * BrazilSouth
In public preview custom neural models can only be trained in select Azure regio
* WestUS2 * WestUS3
-You can copy a model trained in one of the regions listed above to any other region for use.
+> [!TIP]
+> You can copy a model trained in one of the select regions listed above to **any other region** and use it accordingly.
## Best practices
-Custom neural models differ from custom template models in a few different ways.
+Custom neural models differ from custom template models in a few ways. A custom template model relies on a consistent visual template to extract the labeled data. Custom neural models support structured, semi-structured, and unstructured documents for field extraction. When you're choosing between the two model types, start with a neural model and test to determine whether it supports your functional needs.
-### Dealing with variations
+### Dealing with variations
Custom neural models can generalize across different formats of a single document type. As a best practice, create a single model for all variations of a document type. Add at least five labeled samples for each of the different variations to the training dataset.
Custom neural models are only available in the [v3 API](v3-migration-guide.md).
| Document Type | REST API | SDK | Label and Test Models|
|--|--|--|--|
-| Custom document | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+| Custom document | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
The build operation to train a model supports a new ```buildMode``` property. To train a custom neural model, set the ```buildMode``` to ```neural```.

```REST
-https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-preview
+https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30-preview
{ "modelId": "string",
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
Custom template models share the same labeling format and strategy as custom neu
## Model capabilities
-Custom template models support key-value pairs, selection marks, tables, signature fields, and selected regions.
+Custom template models support key-value pairs, selection marks, tables, signature fields, and selected regions.
-| Form fields | Selection marks | Structured fields (Tables) | Signature | Selected regions |
-|--|--|--|--|--|
+| Form fields | Selection marks | Tabular fields (Tables) | Signature | Selected regions |
+|:--:|:--:|:--:|:--:|:--:|
| Supported| Supported | Supported | Preview | Supported |
-## Dealing with variations
+## Tabular fields
-Template models rely on a defined visual template, changes to the template will result in lower accuracy. In those instances, split your training dataset to include at least five samples of each template and train a model for each of the variations. You can then [compose](concept-composed-models.md) the models into a single endpoint. When dealing with subtle variations, like digital PDF documents and images, it's best to include at least five examples of each type in the same training dataset.
+With the release of API version **2022-06-30-preview**, custom template models will support tabular fields (tables):
+
+* Models trained with API version 2022-06-30-preview or later will accept tabular field labels.
+* Documents analyzed with custom template models using API version 2022-06-30-preview or later will produce tabular fields aggregated across the tables.
+* The results can be found in the ```analyzeResult``` object's ```documents``` array that is returned following an analysis operation.
+
+Tabular fields support **cross page tables** by default:
+
+* To label a table that spans multiple pages, label each row of the table across the different pages in a single table.
+* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages.
+
+Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
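As a sketch of consuming the results described above, the code below flattens a tabular field from the ```analyzeResult``` object's ```documents``` array into plain row dictionaries. The field name `WorkExperience` and the exact payload shape are illustrative assumptions:

```python
# Assumed shape of an analyzeResult payload containing one tabular field.
sample_result = {
    "analyzeResult": {
        "documents": [{
            "fields": {
                "WorkExperience": {   # hypothetical tabular field name
                    "type": "array",
                    "valueArray": [
                        {"type": "object", "valueObject": {
                            "Company": {"type": "string", "valueString": "Contoso"},
                            "Years": {"type": "string", "valueString": "3"}}},
                        {"type": "object", "valueObject": {
                            "Company": {"type": "string", "valueString": "Fabrikam"},
                            "Years": {"type": "string", "valueString": "2"}}},
                    ],
                }
            }
        }]
    }
}

def rows_for(result, field_name):
    """Flatten a tabular field into simple row dictionaries."""
    field = result["analyzeResult"]["documents"][0]["fields"][field_name]
    return [
        {name: cell.get("valueString") for name, cell in row["valueObject"].items()}
        for row in field["valueArray"]
    ]

rows = rows_for(sample_result, "WorkExperience")
```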
+
+## Dealing with variations
+
+Template models rely on a defined visual template; changes to the template will result in lower accuracy. In those instances, split your training dataset to include at least five samples of each template and train a model for each of the variations. You can then [compose](concept-composed-models.md) the models into a single endpoint. For subtle variations, like digital PDF documents and images, it's best to include at least five examples of each type in the same training dataset.
## Training a model
Template models are generally available in the [v2.1 API](https://westus.dev.cognitive.
| Model | REST API | SDK | Label and Test Models|
|--|--|--|--|
-| Custom template (preview) | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom template (preview) | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
| Custom template | [Form Recognizer 2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)| [Form Recognizer SDK](quickstarts/get-started-sdk-rest-api.md?pivots=programming-language-python)| [Form Recognizer Sample labeling tool](https://fott-2-1.azurewebsites.net/)|

On the v3 API, the build operation to train a model supports a new ```buildMode``` property. To train a custom template model, set the ```buildMode``` to ```template```.

```REST
-https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-preview
+https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
{ "modelId": "string",
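A hedged sketch of the template build body above: only the ```buildMode``` value differs from a neural build. The `azureBlobSource` shape and model ID are illustrative assumptions:

```python
import json

def template_build_body(model_id, container_url):
    """JSON body for training a custom template model."""
    return json.dumps({
        "modelId": model_id,
        "buildMode": "template",   # "template" instead of "neural"
        "azureBlobSource": {"containerUrl": container_url},  # assumed source shape
    })

body = template_build_body("my-template-model", "https://example.blob.core.windows.net/training")
```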
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Previously updated : 03/10/2022 Last updated : 06/06/2022 recommendations: false
Your training set will consist of structured documents where the formatting and
### Custom neural model
-The custom neural (custom document) model uses deep learning models and base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
+The custom neural (custom document) model uses deep learning and a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model to determine if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
## Build mode
The following tools are supported by Form Recognizer v3.0:
### Try Form Recognizer
-See how data is extracted from your specific or unique documents by using custom models. You need the following resources:
+Try extracting data from your specific or unique documents using custom models. You need the following resources:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
* A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
The following table describes the features available with the associated tools a
| Document type | REST API | SDK | Label and Test Models|
|--|--|--|--|
| Custom form 2.1 | [Form Recognizer 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Form Recognizer SDK](quickstarts/get-started-sdk-rest-api.md?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
-| Custom template 3.0 | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
-| Custom neural | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+| Custom template 3.0 | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom neural | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
> [!NOTE]
The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) doesn't support
* **Custom model API (v3.0)**: This version supports signature detection for custom forms. When you train custom models, you can specify certain fields as signatures. When a document is analyzed with your custom model, it indicates whether a signature was detected or not. * [Form Recognizer v3.0 migration guide](v3-migration-guide.md): This guide shows you how to use the preview version in your applications and workflows.
-* [REST API (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument): This API shows you more about the preview version and new capabilities.
+* [REST API (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument): This API shows you more about the preview version and new capabilities.
### Try signature detection
Explore Form Recognizer quickstarts and REST APIs:
| Quickstart | REST API|
|--|--|
-|[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-01-30-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)|
+|[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-06-30](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)|
| [v2.1 quickstart](quickstarts/get-started-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Title: Form Recognizer general document model | Preview
-description: Concepts encompassing data extraction and analysis using prebuilt general document preview model
+description: Concepts related to data extraction and analysis using prebuilt general document preview model
Previously updated : 03/08/2022 Last updated : 06/06/2022 recommendations: false
The General document preview model combines powerful Optical Character Recogniti
The general document API supports most form types and will analyze your documents and extract keys and associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.

> [!NOTE]
-> The ```2022-01-30-preview``` update to the general document model adds support for selection marks.
+> The ```2022-06-30``` update to the general document model adds support for selection marks.
## General document features
-* The general document model is a pre-trained model, doesn't require labels or training.
+* The general document model is a pre-trained model; it doesn't require labels or training.
-* A single API extracts key-value pairs, selection marks entities, text, tables, and structure from documents.
+* A single API extracts key-value pairs, selection marks, entities, text, tables, and structure from documents.
* The general document model supports structured, semi-structured, and unstructured documents.
* Key names are spans of text within the document that are associated with a value.
* Selection marks are identified as fields with a value of ```:selected:``` or ```:unselected:```

***Sample document processed in the Form Recognizer Studio***
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources |
|-|-|
-|🆕 **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
+|🆕 **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
### Try Form Recognizer
-See how data is extracted from forms and documents using the Form Recognizer Studio or our Sample Labeling tool.
+Try extracting data from forms and documents using the Form Recognizer Studio.
You'll need the following resources:
You'll need the following resources:
## Key-value pairs
-Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field or in an unstructured document they could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
+Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field. In an unstructured document, they could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
-Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. key-value pairs are always spans of text contained in the document and if you have documents where same value is described in different ways, for example, a customer or a user, the associated key will be either customer or user based on what the document contained.
+Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are spans of text contained in the document. If you have documents where the same value is described in different ways, for example, customer and user, the associated key will be either customer or user based on context.
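As a sketch of working with the pairs described above, the code below turns a `keyValuePairs` collection into a plain dictionary, keeping isolated keys (keys with no value) as `None`. The payload shape is an assumption for illustration:

```python
# Assumed shape of the keyValuePairs collection in a general document result.
sample = {
    "keyValuePairs": [
        {"key": {"content": "Full name"}, "value": {"content": "Avery Smith"}},
        {"key": {"content": "Middle name"}},   # key detected with no value
    ]
}

def to_dict(result):
    """Map each extracted key to its value content, or None for isolated keys."""
    pairs = {}
    for kv in result.get("keyValuePairs", []):
        pairs[kv["key"]["content"]] = kv.get("value", {}).get("content")
    return pairs

extracted = to_dict(sample)
```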
## Entities Natural language processing models can identify parts of speech and classify each token or word. The named entity recognition model is able to identify entities like people, locations, and dates to provide for a richer experience. Identifying entities enables you to distinguish between customer types, for example, an individual or an organization.
-The key value pair extraction model and entity identification model are run in parallel on the entire document and not just on the values of the extracted key-value pairs. This process ensures that complex structures where a key can't be identified is still enriched by identifying the entities referenced. You can still match keys or values to entities based on the offsets of the identified spans.
+The key-value pair extraction model and entity identification model are run in parallel on the entire document, not just on the values of the extracted key-value pairs. This process ensures that complex structures where a key can't be identified are still enriched by identifying the entities referenced. You can still match keys or values to entities based on the offsets of the identified spans.
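The span-offset matching described above can be sketched as a simple overlap check. The `{offset, length}` span shape and the entity list are illustrative assumptions:

```python
def spans_overlap(a, b):
    """True when two {offset, length} spans share at least one character."""
    return a["offset"] < b["offset"] + b["length"] and b["offset"] < a["offset"] + a["length"]

def entities_for_value(value_span, entities):
    """Entities whose span overlaps the given key-value value span."""
    return [e for e in entities if spans_overlap(value_span, e["span"])]

value_span = {"offset": 120, "length": 11}   # e.g. the span of "Avery Smith"
entities = [
    {"category": "Person", "span": {"offset": 120, "length": 11}},
    {"category": "DateTime", "span": {"offset": 10, "length": 9}},
]
matched = entities_for_value(value_span, entities)
```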
* The general document is a pre-trained model and can be directly invoked via the REST API.
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Title: Form Recognizer ID document model
-description: Concepts encompassing data extraction and analysis using the prebuilt ID document model
+description: Concepts related to data extraction and analysis using the prebuilt ID document model
Previously updated : 03/11/2022 Last updated : 06/06/2022 recommendations: false- <!-- markdownlint-disable MD033 -->
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|-|--|
-|**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-idDocument**|
+|**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-idDocument**|
### Try Form Recognizer
-See how to extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
+Extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
* An Azure subscriptionΓÇöyou can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
You'll need an ID document. You can use our [sample ID document](https://raw.git
## Form Recognizer preview v3.0
- The Form Recognizer preview introduces several new features and capabilities:
+ The Form Recognizer preview v3.0 introduces several new features and capabilities:
-* **ID document (v3.0)** model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
+* **ID document (v3.0)** prebuilt model supports extraction of endorsement, restriction, and vehicle class codes from US driver's licenses.
+
+* The ID Document **2022-06-30-preview** release supports the following data extraction from US driver's licenses:
+
+ * Date issued
+ * Height
+ * Weight
+ * Eye color
+ * Hair color
+ * Document discriminator security code
### ID document preview field extraction

|Name| Type | Description | Standardized output|
|:--|:-|:-|:-|
-| 🆕 Endorsements | String | Additional driving privileges granted to a driver such as Motorcycle or School bus. | |
-| 🆕 Restrictions | String | Restricted driving privileges applicable to suspended or revoked licenses.| |
-| 🆕VehicleClassification | String | Types of vehicles that can be driven by a driver. ||
+| 🆕 DateOfIssue | Date | Issue date | yyyy-mm-dd |
+| 🆕 Height | String | Height of the holder. | |
+| 🆕 Weight | String | Weight of the holder. | |
+| 🆕 EyeColor | String | Eye color of the holder. | |
+| 🆕 HairColor | String | Hair color of the holder. | |
+| 🆕 DocumentDiscriminator | String | Document discriminator is a security code that identifies where and when the license was issued. | |
+| Endorsements | String | More driving privileges granted to a driver such as Motorcycle or School bus. | |
+| Restrictions | String | Restricted driving privileges applicable to suspended or revoked licenses.| |
+| VehicleClassification | String | Types of vehicles that can be driven by a driver. ||
| CountryRegion | countryRegion | Country or region code compliant with ISO 3166 standard | |
| DateOfBirth | Date | DOB | yyyy-mm-dd |
| DateOfExpiration | Date | Expiration date | yyyy-mm-dd |
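A sketch of reading a few of the driver's license fields listed above from an analysis result. The field value shapes (`valueString`, `valueDate`) are assumptions for illustration:

```python
# Assumed shape of a prebuilt-idDocument analysis result.
id_result = {
    "documents": [{
        "docType": "idDocument.driverLicense",
        "fields": {
            "DateOfIssue": {"type": "date", "valueDate": "2020-06-01"},
            "EyeColor": {"type": "string", "valueString": "BRO"},
            "DocumentDiscriminator": {"type": "string", "valueString": "1234567890"},
        },
    }]
}

def field_value(result, name):
    """Return the string or date value of a field, or None if absent."""
    f = result["documents"][0]["fields"].get(name, {})
    return f.get("valueString") or f.get("valueDate")

issue_date = field_value(id_result, "DateOfIssue")
```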
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
Title: Form Recognizer invoice model
-description: Concepts encompassing data extraction and analysis using prebuilt invoice model
+description: Concepts related to data extraction and analysis using prebuilt invoice model
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|-|--|
-|**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-invoice**|
+|**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-invoice**|
### Try Form Recognizer
You'll need an invoice document. You can use our [sample invoice document](https
|--|:-|:-|
|Invoice| <ul><li>English (United States)ΓÇöen-US</li></ul>| English (United States)ΓÇöen-US|
|Invoice| <ul><li>SpanishΓÇöes</li></ul>| Spanish (United States)ΓÇöes|
+|Invoice (preview)| <ul><li>GermanΓÇöde</li></ul>| German (Germany)-de|
+|Invoice (preview)| <ul><li>FrenchΓÇöfr</li></ul>| French (France)ΓÇöfr|
+|Invoice (preview)| <ul><li>ItalianΓÇöit</li></ul>| Italian (Italy)ΓÇöit|
+|Invoice (preview)| <ul><li>PortugueseΓÇöpt</li></ul>| Portuguese (Portugal)ΓÇöpt|
+|Invoice (preview)| <ul><li>DutchΓÇönl</li></ul>| Dutch (Netherlands)ΓÇönl|
## Field extraction
Following are the line items extracted from an invoice in the JSON output respon
| Unit | String| The unit of the line item, e.g., kg, lb etc. | Hours | |
| Date | Date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 |
| Tax | Number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |
-| VAT | Number | Stands for Value added tax. This is a flat tax levied on an item. Common in European countries | &euro;20.00 | |
+| VAT | Number | Stands for Value added tax. VAT is a flat tax levied on an item. Common in European countries | &euro;20.00 | |
-The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
+The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
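As a sketch of consuming the line items above, the code below totals line-item amounts from the `documentResults` section. The JSON shape and field names are illustrative assumptions, not a verbatim service response:

```python
# Assumed shape of the documentResults section of an invoice analysis.
invoice = {
    "documentResults": [{
        "fields": {
            "Items": {"valueArray": [
                {"valueObject": {"Amount": {"valueNumber": 100.0},
                                 "Tax": {"valueNumber": 10.0}}},
                {"valueObject": {"Amount": {"valueNumber": 50.0}}},
            ]}
        }
    }]
}

def total_amount(doc):
    """Sum the Amount of every extracted line item."""
    items = doc["documentResults"][0]["fields"]["Items"]["valueArray"]
    return sum(i["valueObject"]["Amount"]["valueNumber"] for i in items)

total = total_amount(invoice)
```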
+
+### Key-value pairs (Preview)
+
+The prebuilt invoice **2022-06-30-preview** release returns key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be a field label and the value the user entered for that field, such as a telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
+
+Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are always spans of text contained in the document. If you have documents where the same value is described in different ways, for example, customer and user, the associated key will be either customer or user based on context.
## Form Recognizer preview v3.0
The invoice key-value pairs and line items extracted are in the `documentResults
* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
## Next steps
The invoice key-value pairs and line items extracted are in the `documentResults
* Explore our REST API:

> [!div class="nextstepaction"]
- > [Form Recognizer API v3.0 (Preview)](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
-
+ > [Form Recognizer API v3.0 (Preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+
> [!div class="nextstepaction"]
> [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5ed8c9843c2794cbb1a96291)
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Previously updated : 03/11/2022 Last updated : 06/06/2022 recommendations: false-+ # Form Recognizer layout model
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
-**Data extraction features**
+## Supported document types
-| **Layout model** | **Text Extraction** | **Selection Marks** | **Tables** |
+| **Model** | **Images** | **PDF** | **TIFF** |
| | | | |
| Layout | Γ£ô | Γ£ô | Γ£ô |
+### Data extraction
+
+| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** |
+| | | | | | |
+| Layout | Γ£ô | Γ£ô | Γ£ô | Γ£ô | Γ£ô |
+
+**Supported paragraph roles**:
+
+* title
+* sectionHeading
+* footnote
+* pageHeader
+* pageFooter
+* pageNumber
+
+For a richer semantic analysis, paragraph roles are best used with unstructured documents to better understand the layout of the extracted content.
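The paragraph roles above can be used to slice a layout result by structure. As a sketch, the code below groups paragraphs by role; the `paragraphs` payload shape is an assumption for illustration:

```python
# Assumed shape of the paragraphs collection in a layout analysis result.
paragraphs = [
    {"role": "title", "content": "Quarterly Report"},
    {"role": "sectionHeading", "content": "Revenue"},
    {"content": "Revenue grew quarter over quarter."},   # body text carries no role
    {"role": "pageNumber", "content": "1"},
]

def by_role(paras, role):
    """Content of every paragraph tagged with the given role."""
    return [p["content"] for p in paras if p.get("role") == role]

headings = by_role(paragraphs, "sectionHeading")
```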
+ ## Development options

The following tools are supported by Form Recognizer v2.1:
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|-|--|
-|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-layout**|
+|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-layout**|
-### Try Form Recognizer
+## Try Form Recognizer
-See how data is extracted from forms and documents using the Form Recognizer Studio or Sample Labeling tool. You'll need the following resources:
+Try extracting data from forms and documents using the Form Recognizer Studio. You'll need the following resources:
* An Azure subscriptionΓÇöyou can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data is extracted from forms and documents using the Form Recognizer Stu
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+### Form Recognizer Studio (preview)
> [!NOTE]
> Form Recognizer studio is available with the preview (v3.0) API.

***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***

1. On the Form Recognizer Studio home page, select **Layout**
See how data is extracted from forms and documents using the Form Recognizer Stu
> [!div class="nextstepaction"]
> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
-#### Sample Labeling tool
-
-You'll need a form document. You can use our [sample form document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf).
-
-1. On the Sample Labeling tool home page, select **Use Layout to get text, tables, and selection marks**.
-
-1. Select **Local file** from the dropdown menu.
-
-1. Upload your file and select **Run Layout**
-
- :::image type="content" source="media/try-layout.png" alt-text="Screenshot: Screenshot: Sample Labeling tool dropdown layout file source selection menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)
- ## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned).
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier (4 MB for the free tier).
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
-
-> [!NOTE]
-> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
+* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This dimension corresponds to about 8-point text at 150 DPI.
## Supported languages and locales

*See* [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
-## Data extraction
+## Model extraction
-The layout model extracts table structures, selection marks, typeface and handwritten text, and bounding box coordinates from your documents.
-
-### Tables and table headers
+The layout model extracts text, selection marks, tables, paragraphs, and paragraph types (`roles`) from your documents.
-Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding box is output along with information whether it's recognized as part of a header or not. The model predicted header cells can span multiple rows and aren't necessarily the first rows in a table. They also work with rotated tables. Each table cell also includes the full text with references to the individual words in the `readResults` section.
+### Text lines and words
+Layout API extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines, if detected, along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
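To make the span mechanics concrete, here's a minimal Python sketch of resolving handwritten `styles` spans against the top-level `content` string. The response shape is simplified and the sample values are hypothetical; a real response comes from the service's analyze call.

```python
def handwritten_fragments(analyze_result: dict) -> list:
    """Resolve handwritten style spans against the top-level content string."""
    content = analyze_result["content"]
    fragments = []
    for style in analyze_result.get("styles", []):
        if style.get("isHandwritten"):
            for span in style["spans"]:
                start = span["offset"]
                fragments.append(content[start : start + span["length"]])
    return fragments

# Hypothetical response fragment for illustration only
sample = {
    "content": "Total due: 42.00 thanks!",
    "styles": [{"isHandwritten": True, "spans": [{"offset": 17, "length": 7}]}],
}
print(handwritten_fragments(sample))  # -> ['thanks!']
```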
### Selection marks
-Layout API also extracts selection marks from documents. Extracted selection marks include the bounding box, confidence, and state (selected/unselected). Selection mark information is extracted in the `readResults` section of the JSON output.
-
+Layout API also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected`/`unselected`). Any associated text, if extracted, is also included as the starting index (`offset`) and `length` that reference the top-level `content` property containing the full text from the document.
-### Text lines and words
-
-The layout model extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Typeface and handwritten text is extracted from lines and words. The service then returns bounding box coordinates, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
+### Tables and table headers
+Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Extracted table information includes the number of columns and rows, row span, and column span. Each cell is output with its bounding `polygon`, along with information about whether it's recognized as a `columnHeader`. The API also works with rotated tables. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`) and the `length` within the top-level `content` that contains the full text from the document.
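As a rough illustration of the cell indices described above, the following Python sketch places extracted cells into a row-by-column grid. The dictionary shape and values are hypothetical, simplified from the documented output (cell spans are ignored):

```python
def table_to_grid(table: dict) -> list:
    """Place each cell's text into a rowCount x columnCount grid."""
    grid = [["" for _ in range(table["columnCount"])] for _ in range(table["rowCount"])]
    for cell in table["cells"]:
        grid[cell["rowIndex"]][cell["columnIndex"]] = cell["content"]
    return grid

# Hypothetical extracted table for illustration only
sample_table = {
    "rowCount": 2,
    "columnCount": 2,
    "cells": [
        {"rowIndex": 0, "columnIndex": 0, "kind": "columnHeader", "content": "Item"},
        {"rowIndex": 0, "columnIndex": 1, "kind": "columnHeader", "content": "Qty"},
        {"rowIndex": 1, "columnIndex": 0, "content": "Widget"},
        {"rowIndex": 1, "columnIndex": 1, "content": "3"},
    ],
}
print(table_to_grid(sample_table))  # -> [['Item', 'Qty'], ['Widget', '3']]
```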
-### Natural reading order for text lines (Latin only)
+### Paragraphs
-In Form Recognizer v2.1, you can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
+The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
-In Form Recognizer v3.0, the natural reading order output is used by the service in all cases. Therefore, there's no `readingOrder` parameter provided in this version.
+### Paragraph roles
-### Handwritten classification for text lines (Latin only)
+The Layout model may flag certain paragraphs with their specialized type or `role` as predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
-The response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages.
+| **Predicted role** | **Description** |
+| --- | --- |
+| `title` | The main heading(s) on the page |
+| `sectionHeading` | One or more subheading(s) on the page |
+| `footnote` | Text near the bottom of the page |
+| `pageHeader` | Text near the top edge of the page |
+| `pageFooter` | Text near the bottom edge of the page |
+| `pageNumber` | Page number |
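One way to use the roles above is to bucket paragraphs by their predicted role for downstream semantic analysis. This is a minimal sketch over a simplified, hypothetical response shape:

```python
from collections import defaultdict

def paragraphs_by_role(analyze_result: dict) -> dict:
    """Bucket paragraph text by predicted role; unflagged paragraphs go under 'body'."""
    buckets = defaultdict(list)
    for para in analyze_result.get("paragraphs", []):
        buckets[para.get("role", "body")].append(para["content"])
    return dict(buckets)

# Hypothetical response fragment for illustration only
sample = {
    "paragraphs": [
        {"role": "title", "content": "Quarterly Report"},
        {"content": "Revenue grew in Q2."},
        {"role": "pageNumber", "content": "1"},
    ],
}
print(paragraphs_by_role(sample))
```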
### Select page numbers or ranges for text extraction
-For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
-
-## Form Recognizer preview v3.0
-
- The Form Recognizer preview introduces several new features and capabilities.
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
-
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
## Next steps
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Title: Form Recognizer models
-description: Concepts encompassing data extraction and analysis using prebuilt models.
+description: Concepts related to data extraction and analysis using prebuilt models.
Previously updated : 03/16/2022 Last updated : 06/06/2022 recommendations: false
# Form Recognizer models
-Azure Form Recognizer prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. Prebuilt models use optical character recognition (OCR) combined with deep learning models to identify and extract predefined text and data fields common to specific form and document types. Form Recognizer extracts and analyzes form and document data, then returns an organized, structured JSON response. Form Recognizer v2.1 supports invoice, receipt, ID document, and business card models.
+Azure Form Recognizer supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt document analysis or domain-specific model, or train a custom model tailored to your specific business needs and use cases. Form Recognizer can be used with the REST API or the Python, C#, Java, and JavaScript SDKs.
## Model overview
The W-2 model analyzes and extracts key information reported in each box on a W-
[:::image type="icon" source="media/studio/layout.png":::](https://formrecognizer.appliedai.azure.com/studio/layout)
-The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from forms and documents.
+The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from documents.
***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***:

> [!div class="nextstepaction"]
+>
> [Learn more: layout model](concept-layout.md)

### Invoice

[:::image type="icon" source="media/studio/invoice.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
-The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due. Currently, the model supports both English and Spanish invoices.
+The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
***Sample invoice processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)***:
The invoice model analyzes and extracts key information from sales invoices. The
[:::image type="icon" source="media/studio/receipt.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
-The receipt model analyzes and extracts key information from printed and handwritten receipts.
+* The receipt model analyzes and extracts key information from printed and handwritten sales receipts.
+
+* The preview version v3.0 also supports single-page hotel receipt processing.
***Sample receipt processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The business card model analyzes and extracts key information from business card
[:::image type="icon" source="media/studio/custom.png":::](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)
-The custom model analyzes and extracts data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
+* Custom models analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
+
+* The preview version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
***Sample custom template processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
The custom model analyzes and extracts data from forms and documents specific to
A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 100 trained custom models to a single composed model.
-***Composed model dialog window[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
+***Composed model dialog window in [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
:::image type="content" source="media/studio/composed-model.png" alt-text="Screenshot of Form Recognizer Studio compose custom model dialog window.":::
A composed model is created by taking a collection of custom models and assignin
## Model data extraction
- | **Data extraction** | **Text extraction** |**Key-Value pairs** |**Fields**|**Selection Marks** | **Tables** |**Entities** |
-| |:: |::|:: |:: |:: |:: |
-|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | || | | |
-|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | ✓ | ✓ | ✓ | ✓ ||
-|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | ✓ || ✓ | ✓ | ✓ |
-| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | || ✓ | ✓ | |
-| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | ✓ |✓| ✓ | ✓ ||
-| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | ✓ |✓| | ||
-| [prebuilt-idDocument](concept-id-document.md#field-extraction) | ✓ | ✓ |✓| | ||
-| [prebuilt-businessCard](concept-business-card.md#field-extraction) | ✓ | ✓ | ✓| | ||
-| [Custom](concept-custom.md#compare-model-features) |✓ | ✓ || ✓ | ✓ | ✓ |
+| **Model ID** | **Text extraction** | **Selection Marks** | **Tables** | **Paragraphs** | **Key-Value pairs** | **Fields** | **Entities** |
+|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | | | ✓ | | | |
+|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | ✓ | | ✓ | | ✓ | |
+|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ |
| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | ✓ | ✓ | ✓ | | | |
| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | ✓ | | ✓ | |
| [prebuilt-idDocument](concept-id-document.md#field-extraction) | ✓ | | | ✓ | | ✓ | |
| [prebuilt-businessCard](concept-business-card.md#field-extraction) | ✓ | | | ✓ | | ✓ | |
| [Custom](concept-custom.md#compare-model-features) | ✓ | ✓ | ✓ | ✓ | | ✓ | |
## Input requirements
A composed model is created by taking a collection of custom models and assignin
> [!NOTE]
> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer service.
-## Form Recognizer preview v3.0
-
- Form Recognizer v3.0 (preview) introduces several new features and capabilities:
-
-* [**Read (preview)**](concept-read.md) model is a new API that extracts text lines, words, their locations, detected languages, and handwritten text, if detected.
-* [**General document (preview)**](concept-general-document.md) model is a new API that uses a pre-trained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents.
-* [**Receipt (preview)**](concept-receipt.md) model supports single-page hotel receipt processing.
-* [**ID document (preview)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
-* [**W-2 (preview)**](concept-w2.md) model supports employee, employer, wage information, etc. from US W-2 forms.
-* [**Custom model API (preview)**](concept-custom.md) supports signature detection for custom forms.
### Version migration

Learn how to use Form Recognizer v3.0 in your applications by following our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md)
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Title: Read - Form Recognizer
+ Title: Read OCR - Form Recognizer
-description: Learn concepts related to Read API analysis with Form Recognizer API—usage and limits.
+description: Learn concepts related to Read OCR API analysis with Form Recognizer API—usage and limits.
Previously updated : 03/09/2022 Last updated : 06/06/2022 recommendations: false
-# Form Recognizer read model
+# Form Recognizer Read OCR model
-Form Recognizer v3.0 preview includes the new Read API model. The read model extracts typeface and handwritten text including mixed languages in documents. The read model can detect lines, words, locations, and languages and is the core of all the other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the read model as a foundation for extracting texts from documents.
+Form Recognizer v3.0 preview includes the new Read Optical Character Recognition (OCR) model. The Read OCR model extracts typeface and handwritten text including mixed languages in documents. The Read OCR model can detect lines, words, locations, and languages and is the core of all other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the Read OCR model as a foundation for extracting texts from documents.
+
+## Supported document types
+
+| **Model** | **Images** | **PDF** | **TIFF** | **Word** | **Excel** | **PowerPoint** | **HTML** |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Read | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+
+### Data extraction
+
+| **Read model** | **Text** | **[Language detection](language-support.md#detected-languages-read-api)** |
+| --- | --- | --- |
+| prebuilt-read | ✓ | ✓ |
## Development options
The following resources are supported by Form Recognizer v3.0:
|-||| |**Read model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul>|**prebuilt-read**|
-## Data extraction
-
-| **Read model** | **Text Extraction** | **[Language detection](language-support.md#detected-languages-read-api)** |
-| | | |
-prebuilt-read | ✓ |✓ |
-
-### Try Form Recognizer
+## Try Form Recognizer
-See how text is extracted from forms and documents using the Form Recognizer Studio. You'll need the following assets:
+Try extracting text from forms and documents using the Form Recognizer Studio. You'll need the following assets:
* An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how text is extracted from forms and documents using the Form Recognizer Stu
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+### Form Recognizer Studio (preview)
> [!NOTE]
-> Form Recognizer studio is available with the preview (v3.0) API.
+> Form Recognizer studio is available with the preview (v3.0) API. The latest service preview is currently not enabled for analyzing Microsoft Word, Excel, PowerPoint, and HTML file formats using the Form Recognizer Studio.
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/read)***
See how text is extracted from forms and documents using the Form Recognizer Stu
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Additionally, Microsoft Word, Excel, PowerPoint, and HTML files are supported with the Read API in **2022-06-30-preview**.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier (4 MB for the free tier)
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
+* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This dimension corresponds to about eight-point font text at 150 DPI.
## Supported languages and locales

Form Recognizer preview version supports several languages for the read model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
-## Features
+## Data detection and extraction
-### Text lines and words
+### Pages
-Read API extracts text from documents and images. It accepts PDFs and images of documents and handles printed and/or handwritten text, and supports mixed languages. Text is extracted as text lines, words, bounding boxes, confidence scores, and style, whether handwritten or not, supported for Latin languages only.
+With the added support for Microsoft Word, Excel, PowerPoint, and HTML files, the page units in the model output are computed as shown:
-### Language detection
+| **File format** | **Computed page unit** | **Total pages** |
+| --- | --- | --- |
+|Images | Each image = 1 page unit | Total images |
+|PDF | Each page in the PDF = 1 page unit | Total pages in the PDF |
+|Word | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
+|Excel | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images |
+|PowerPoint| Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images |
+|HTML| Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
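The Word row of the table above can be sketched as a small calculation. The helper name is hypothetical; it simply applies the "up to 3,000 characters = 1 page unit, plus 1 page unit per embedded image" rule:

```python
import math

def word_page_units(total_characters: int, embedded_images: int) -> int:
    """Approximate page units for a Word file: up to 3,000 characters of text
    counts as one page unit, and each embedded image adds one more."""
    text_units = math.ceil(total_characters / 3000)
    return text_units + embedded_images

print(word_page_units(7500, 2))  # 3 text units + 2 image units -> 5
```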
+
+### Text lines and words
-Read adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the language at the text line level along with the confidence score.
+Read extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
-### Handwritten classification for text lines (Latin only)
+For Microsoft Word, Excel, PowerPoint, and HTML file formats, Read will extract all embedded text as is. For any embedded images, it will run OCR on the images to extract text and append the text from each image as an added entry to the `pages` collection. These added entries will include the extracted text lines and words, their bounding polygons, confidences, and the spans pointing to the associated text.
-The response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages.
+### Language detection
+
+Read adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict all detected languages for text lines along with the `confidence` in the `languages` collection under `analyzeResult`.
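Since each detected language carries `spans` into the top-level `content`, the fragments per locale can be recovered as in this sketch (simplified, hypothetical response shape):

```python
def text_by_language(analyze_result: dict) -> dict:
    """Group text fragments by detected locale using the `languages` spans."""
    content = analyze_result["content"]
    grouped = {}
    for language in analyze_result.get("languages", []):
        for span in language["spans"]:
            fragment = content[span["offset"] : span["offset"] + span["length"]]
            grouped.setdefault(language["locale"], []).append(fragment)
    return grouped

# Hypothetical response fragment for illustration only
sample = {
    "content": "Hello Bonjour",
    "languages": [
        {"locale": "en", "confidence": 0.99, "spans": [{"offset": 0, "length": 5}]},
        {"locale": "fr", "confidence": 0.95, "spans": [{"offset": 6, "length": 7}]},
    ],
}
print(text_by_language(sample))  # -> {'en': ['Hello'], 'fr': ['Bonjour']}
```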
### Select page(s) for text extraction
-For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
+For large multi-page PDF documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
+
+> [!NOTE]
+> For Microsoft Word, Excel, PowerPoint, and HTML file formats, the Read API ignores the pages parameter and extracts all pages by default.
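A sketch of how the `pages` query parameter fits into an analyze request URL, assuming the v3 preview `documentModels/{modelId}:analyze` route; the endpoint shown is a placeholder for your own resource:

```python
from urllib.parse import urlencode

def build_analyze_url(endpoint, model_id, api_version, pages=None):
    """Build an analyze request URL; `pages` narrows text extraction to
    specific page numbers or ranges (omit it for Office/HTML input)."""
    params = {"api-version": api_version}
    if pages:
        params["pages"] = pages
    return f"{endpoint}/formrecognizer/documentModels/{model_id}:analyze?{urlencode(params)}"

url = build_analyze_url(
    "https://contoso.cognitiveservices.azure.com",  # hypothetical endpoint
    "prebuilt-read",
    "2022-06-30-preview",
    pages="1-3,5",
)
print(url)
```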
## Next steps
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Title: Form Recognizer receipt model
-description: Concepts encompassing data extraction and analysis using the prebuilt receipt model
+description: Concepts related to data extraction and analysis using the prebuilt receipt model
Previously updated : 03/11/2022 Last updated : 06/06/2022 recommendations: false
# Form Recognizer receipt model
-The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns a structured JSON data representation.
+The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, total tax, and transaction total and returns a structured JSON data representation.
***Sample receipt processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|-|--|
-|**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li></ul>|**prebuilt-receipt**|
+|**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li></ul>|**prebuilt-receipt**|
### Try Form Recognizer
See how data, including time and date of transactions, merchant information, and
#### Sample Labeling tool (API v2.1)
-You will need a receipt document. You can use our [sample receipt document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-receipt.png).
+You'll need a receipt document. You can use our [sample receipt document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-receipt.png).
1. On the Sample Labeling tool home page, select **Use prebuilt model to get data**.
You will need a receipt document. You can use our [sample receipt document](http
* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
+* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller.
* The total size of the training data is 500 pages or less.
* If your PDFs are password-locked, you must remove the lock before submission.
You will need a receipt document. You can use our [sample receipt document](http
| TransactionTime | Time | Time the receipt was issued | hh-mm-ss (24-hour) |
| Total | Number (USD)| Full transaction total of receipt | Two-decimal float|
| Subtotal | Number (USD) | Subtotal of receipt, often before taxes are applied | Two-decimal float|
-| Tax | Number (USD) | Tax on receipt (often sales tax or equivalent) | Two-decimal float |
+| Tax | Number (USD) | Total tax on receipt (often sales tax or equivalent). **Renamed to "TotalTax" in 2022-06-30-preview version**. | Two-decimal float |
| Tip | Number (USD) | Tip included by buyer | Two-decimal float|
| Items | Array of objects | Extracted line items, with name, quantity, unit price, and total price extracted | |
-| Name | String | Item name | |
-| Quantity | Number | Quantity of each item | Integer |
+| Name | String | Item description. **Renamed to "Description" in 2022-06-30-preview version**. | |
+| Quantity | Number | Quantity of each item | Two-decimal float |
| Price | Number | Individual price of each item unit| Two-decimal float |
-| Total Price | Number | Total price of line item | Two-decimal float |
+| TotalPrice | Number | Total price of line item | Two-decimal float |
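Because the tax field was renamed across versions, client code reading receipt fields may want to check both names. A minimal sketch over simplified, hypothetical field dictionaries:

```python
def get_total_tax(fields: dict):
    """Read the receipt tax across API versions: the `Tax` field was renamed
    to `TotalTax` in the 2022-06-30-preview version, so check both names."""
    field = fields.get("TotalTax") or fields.get("Tax")
    return None if field is None else field.get("value")

# Hypothetical field dictionaries for illustration only
old_fields = {"Tax": {"type": "number", "value": 1.75}}
new_fields = {"TotalTax": {"type": "number", "value": 1.75}}
print(get_total_tax(old_fields), get_total_tax(new_fields))  # -> 1.75 1.75
```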
## Form Recognizer preview v3.0
You will need a receipt document. You can use our [sample receipt document](http
| Items.*.Category | String | Item category, for example, Room, Tax, etc. | |
| Items.*.Date | Date | Item date | yyyy-mm-dd |
| Items.*.Description | String | Item description | |
-| Items.*.TotalPrice | Number | Item total price | Integer |
+| Items.*.TotalPrice | Number | Item total price | Two-decimal float |
| Locale | String | Locale of the receipt, for example, en-US. | ISO language-country code |
| MerchantAddress | String | Listed address of merchant | |
| MerchantAliases | Array| | |
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Title: Form Recognizer W-2 form prebuilt model
+ Title: Form Recognizer W-2 prebuilt model
-description: Data extraction and analysis extraction using the prebuilt-tax Form W-2 model
+description: Data extraction and analysis extraction using the prebuilt W-2 model
Previously updated : 03/25/2022 Last updated : 06/06/2022 recommendations: false
A W-2 is a multipart form divided into state and federal sections and consisting
## Development options
-The prebuilt W-2 form, model is supported by Form Recognizer v3.0 with the following tools:
+The prebuilt W-2 model is supported by Form Recognizer v3.0 with the following tools:
| Feature | Resources | Model ID |
|-|-|--|
The prebuilt W-2 form, model is supported by Form Recognizer v3.0 with the follo
### Try Form Recognizer
-See how data is extracted from W-2 forms using the Form Recognizer Studio. You'll need the following resources:
+Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need the following resources:
* An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
> [!NOTE]
> Form Recognizer studio is available with v3.0 preview API.
-1. On the [Form Recognizer Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **W-2 form**.
+1. On the [Form Recognizer Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **W-2**.
1. You can analyze the sample W-2 document or select the **➕ Add** button to upload your own sample.
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
| Model | Language—Locale code | Default |
|--|:-|:-|
-|prebuilt-tax.us.w2| <ul>English (United States)</ul></br>|English (United States)ΓÇöen-US|
+|prebuilt-tax.us.w2|<ul><li>English (United States)</li></ul>|English (United States)—en-US|
## Field extraction
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
| TaxYear | | Number | Tax year | 2020 |
| W2FormVariant | | String | The variants of W-2 forms, including "W-2", "W-2AS", "W-2CM", "W-2GU", "W-2VI" | W-2 |

### Migration guide and REST API v3.0

* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
## Next steps

* Complete a Form Recognizer quickstart:
-|Programming language | :::image type="content" source="media/form-recognizer-icon.png" alt-text="Form Recognizer icon from the Azure portal."::: |Programming language
-|::|::|::|
-|[**C#**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)||[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)|
-|[**Java**](quickstarts/try-v3-java-sdk.md#prebuilt-model)||[**Python**](quickstarts/try-v3-python-sdk.md#prebuilt-model)|
-|[**REST API**](quickstarts/try-v3-rest-api.md)|||
+> [!div class="checklist"]
+>
+> * [**REST API**](quickstarts/try-v3-rest-api.md)
+> * [**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)
+> * [**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)
+> * [**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)
+> * [**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
Previously updated : 03/25/2022 Last updated : 06/06/2022 # Configure Form Recognizer containers
>
> Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. For more information, see [**Request approval to run container**](form-recognizer-container-install-run.md#request-approval-to-run-the-container).
-With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containers—**Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have several required settings and a few optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
+With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containers—**Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
## Configuration settings
Each container has the following configuration settings:
|Required|Setting|Purpose|
|--|--|--|
|Yes|[Key](#key-and-billing-configuration-setting)|Tracks billing information.|
-|Yes|[Billing](#key-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. _See_ [Billing]](form-recognizer-container-install-run.md#billing), for more information. For more information and a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../../cognitive-services/cognitive-services-custom-subdomains.md).|
+|Yes|[Billing](#key-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. For more information, _see_ [Billing](form-recognizer-container-install-run.md#billing). For more information and a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../../cognitive-services/cognitive-services-custom-subdomains.md).|
|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
-|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
+|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) customer content support to your container.|
|No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.|
|No|HTTP Proxy|Configures an HTTP proxy for making outbound requests.|
|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container.|
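The required settings in the table map directly onto container environment variables. Below is a minimal `docker-compose.yml` sketch for the Layout container; the variable names (`EULA`, `billing`, `apikey`) follow the common Cognitive Services container convention and the image path is a placeholder, so confirm both against the [Example docker-compose.yml file](#example-docker-composeyml-file) section.

```yaml
version: "3.9"
services:
  azure-form-recognizer-layout:
    # Placeholder image path: use the gated-preview image you were approved for.
    image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
    environment:
      # Required settings from the table above.
      - EULA=accept
      - billing={FORM_RECOGNIZER_ENDPOINT_URI}
      - apikey={FORM_RECOGNIZER_KEY}
    ports:
      - "5000:5000"
```

The optional settings in the table (Fluentd, HTTP proxy, Logging) would be added to the same `environment` list.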
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
Previously updated : 01/06/2022 Last updated : 06/06/2022 recommendations: false #Customer intent: I want to learn how to create a Form Recognizer service in the Azure portal.
Let's get started:
1. Copy the key and endpoint values from your Form Recognizer resource and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
-1. If your overview page does not have the keys and endpoint visible, you can select the **Keys and Endpoint** button on the left navigation bar and retrieve them there.
+1. If your overview page doesn't have the keys and endpoint visible, you can select the **Keys and Endpoint** button on the left navigation bar and retrieve them there.
:::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL":::
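Rather than leaving the key in a notepad file long-term, application code usually reads the key and endpoint from environment variables at run time. A small JavaScript sketch follows; the `FR_ENDPOINT` and `FR_KEY` variable names are hypothetical, not an SDK convention:

```javascript
// Read the Form Recognizer endpoint and key from the environment instead of hard-coding them.
// FR_ENDPOINT and FR_KEY are illustrative names; use whatever your deployment defines.
function readCredentials(env = process.env) {
  const { FR_ENDPOINT: endpoint, FR_KEY: key } = env;
  if (!endpoint || !key) {
    throw new Error("Set FR_ENDPOINT and FR_KEY before starting the app.");
  }
  return { endpoint, key };
}
```

This keeps the secret out of source control while still letting every quickstart sample receive the same two values.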
That's it! You're now ready to start automating data extraction using Azure Form
* Try the [Form Recognizer Studio](concept-form-recognizer-studio.md), an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications.
-* Complete a Form Recognizer [C#](quickstarts/try-v3-csharp-sdk.md),[Python](quickstarts/try-v3-python-sdk.md), [Java](quickstarts/try-v3-java-sdk.md), or [JavaScript](quickstarts/try-v3-javascript-sdk.md) quickstart and get started creating a document processing app in the development language of your choice.
+* Complete a Form Recognizer quickstart and get started creating a document processing app in the development language of your choice:
+
+ * [C#](quickstarts/try-v3-csharp-sdk.md)
+ * [Python](quickstarts/try-v3-python-sdk.md)
+ * [Java](quickstarts/try-v3-java-sdk.md)
+ * [JavaScript](quickstarts/try-v3-javascript-sdk.md)
applied-ai-services Use Prebuilt Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/use-prebuilt-read.md
recommendations: false
> [!NOTE]
> Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-The current API version is ```2022-01-30-preview```.
+The current API version is ```2022-06-30-preview```.
::: zone pivot="programming-language-csharp"
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
Previously updated : 04/22/2022 Last updated : 06/06/2022
Pre-Built Receipt and Business Cards support all English receipts and business c
|English (India)|`en-in`|
|English (United States)|`en-us`|
+## Business card model
+
+The **2022-06-30-preview** release includes Japanese language support:
+
+|Language| Locale code |
+|:--|:-:|
+| Japanese | `ja` |
## Invoice model

|Language| Locale code |
|:--|:-:|
-|English (United States)|en-us|
-|Spanish (preview) | es |
+|English (United States) |en-US|
+|Spanish| es|
+|German (**2022-06-30-preview**)| de|
+|French (**2022-06-30-preview**)| fr|
+|Italian (**2022-06-30-preview**)|it|
+|Portuguese (**2022-06-30-preview**)|pt|
+|Dutch (**2022-06-30-preview**)| nl|
## ID documents
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Previously updated : 03/08/2022 Last updated : 06/06/2022 recommendations: false keywords: automated data processing, document processing, automated data entry, forms processing #Customer intent: As a developer of form-processing software, I want to learn what the Form Recognizer service does so I can determine if I should use it.- <!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD024 -->
Form Recognizer uses the following models to easily identify, extract, and analy
**Document analysis models**
-* [**Read model**](concept-read.md) | Extract typeface and handwritten text lines, words, locations, and detected languages from documents and images.
-* [**Layout model**](concept-layout.md) | Extract text, tables, selection marks, and structure information from documents (PDF and TIFF) and images (JPG, PNG, and BMP).
+* [**Read model**](concept-read.md) | Extract text lines, words, locations, and detected languages from documents and images.
+* [**Layout model**](concept-layout.md) | Extract text, tables, selection marks, and structure information from documents and images.
* [**General document model**](concept-general-document.md) | Extract key-value pairs, selection marks, and entities from documents.

**Prebuilt models**
This section helps you decide which Form Recognizer v3.0 supported feature you s
| What type of document do you want to analyze?| How is the document formatted? | Your best solution |
| --|--|--|
|<ul><li>**W-2 Form**</li></ul>| Is your W-2 document composed in United States English (en-US) text?|<ul><li>If **Yes**, use the [**W-2 Form**](concept-w2.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
-|<ul><li>**Text-only document**</li></yl>| Is your text-only document _printed_ in a [supported language](language-support.md#read-layout-and-custom-form-template-model) or, if handwritten, is it composed in English?|<ul><li>If **Yes**, use the [**Read**](concept-invoice.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>
-|<ul><li>**Invoice**</li></yl>| Is your invoice document composed in English or Spanish text?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>
+|<ul><li>**Primarily text content**</li></ul>| Is your document _printed_ in a [supported language](language-support.md#read-layout-and-custom-form-template-model) and are you only interested in text and not tables, selection marks, and the structure?|<ul><li>If **Yes** to text-only extraction, use the [**Read**](concept-read.md) model.</li><li>If **No**, because you also need structure information, use the [**Layout**](concept-layout.md) model.</li></ul>|
+|<ul><li>**General structured document**</li></ul>| Is your document mostly structured and does it contain a few fields and values that may not be covered by the other prebuilt models?|<ul><li>If **Yes**, use the [**General document (preview)**](concept-general-document.md) model.</li><li> If **No**, because the fields and values are complex and highly variable, train and build a [**Custom**](how-to-guides/build-custom-model-v3.md) model.</li></ul>|
+|<ul><li>**Invoice**</li></ul>| Is your invoice document composed in a [supported language](language-support.md#invoice-model)?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
|<ul><li>**Receipt**</li><li>**Business card**</li></ul>| Is your receipt or business card document composed in English text? | <ul><li>If **Yes**, use the [**Receipt**](concept-receipt.md) or [**Business Card**](concept-business-card.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
|<ul><li>**Form** or **Document**</li></ul>| Is your form or document an industry-standard format commonly used in your business or industry?| <ul><li>If **Yes**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li><li>If **No**, you can [**Train and build a custom model**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model).</li></ul>|
The following features and development options are supported by the Form Recogn
|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs, and named entities.|<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
|[**Layout model**](concept-layout.md) | Extract text, selection marks, and table structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br></li><li>Custom model API v3.0 offers a new model type **Custom Neural** or custom document to analyze unstructured documents.</li></ul>| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>|
|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**.|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
|[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
This documentation contains the following article types:
> [!div class="checklist"]
>
> * Try our [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument) to learn more.
+> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more.
> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes.

### [Form Recognizer v2.1](#tab/v2-1)
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Previously updated : 03/16/2022 Last updated : 06/06/2022 recommendations: false
[Reference documentation](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/@azure/ai-form-recognizer_4.0.0-beta.3/sdk/formrecognizer/ai-form-recognizer/) | [Package (npm)](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.3) | [Samples](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-bet)
-Get started with Azure Form Recognizer using the JavaScript programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDks into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure Form Recognizer using the JavaScript programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page. In this quickstart, you'll use the following features to analyze and extract data and values from forms and documents:
-* [🆕 **General document**](#general-document-model)—Analyze and extract common fields from specific document types using a pre-trained invoice model.
+* [🆕 **General document**](#general-document-model)—Analyze and extract key-value pairs, selection marks, and entities from documents.
* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
-* [**Prebuilt Invoice**](#prebuilt-model)ΓÇöAnalyze and extract common fields from specific document types using a pre-trained model.
+* [**Prebuilt Invoice**](#prebuilt-model)—Analyze and extract common fields from specific document types using a pre-trained invoice model.
## Prerequisites
Extract text, tables, structure, key-value pairs, and named entities from docume
const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
- // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
- const key = "<your-endpoint>";
- const endpoint = "<your-key>";
+ // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
+ const key = "<your-key>";
+ const endpoint = "<your-endpoint>";
  // sample document
  const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
Extract text, selection marks, text styles, table structures, and bounding regio
const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
- // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
- const key = "<your-endpoint>";
- const endpoint = "<your-key>";
+ // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
+ const key = "<your-key>";
+ const endpoint = "<your-endpoint>";
  // sample document
  const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
  // using the PrebuiltModels object, rather than the raw model ID, adds strong typing to the model's output
  const { PrebuiltModels } = require("@azure/ai-form-recognizer");
- // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
- const key = "<your-endpoint>";
- const endpoint = "<your-key>";
+ // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
+ const key = "<your-key>";
+ const endpoint = "<your-endpoint>";
  // sample document
  const invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf";
In this quickstart, you used the Form Recognizer JavaScript SDK to analyze vario
## Next steps

> [!div class="nextstepaction"]
-> [REST API v3.0reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)
+> [REST API v3.0 reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
> [!div class="nextstepaction"]
> [Form Recognizer JavaScript reference library](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0-beta.1/index.html)
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Previously updated : 03/24/2022 Last updated : 06/06/2022
-# Get started: Form Recognizer REST API 2022-01-30-preview
+# Get started: Form Recognizer REST API 2022-06-30-preview
<!-- markdownlint-disable MD036 -->

>[!NOTE]
> Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-The current API version is ```2022-01-30-preview```.
+The current API version is **2022-06-30-preview**.
-| [Form Recognizer REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) | [Azure SDKs](https://azure.github.io/azure-sdk/releases/latest/index.html) |
+| [Form Recognizer REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) | [Azure SDKs](https://azure.github.io/azure-sdk/releases/latest/index.html) |
Get started with Azure Form Recognizer using the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models using the REST API or by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Form Recognizer features and development options, visit our
**Custom Models**

* Custom—Analyze and extract form fields and other content from your custom forms, using models you trained with your own form types.
-* Composed custom—Compose a collection of custom models and assign them to a single model built from your form types.
+* Composed custom—Compose a collection of custom models and assign them to a single model ID.
## Prerequisites
Before you run the cURL command, make the following changes:
#### POST request

```bash
-curl -v -i POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
+curl -v -i POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-06-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
```

#### Reference table
You'll receive a `202 (Success)` response that includes an **Operation-Location*
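The end-to-end pattern in this quickstart (POST the document, read the `Operation-Location` header from the `202` response, then GET that URL until the operation finishes) can be sketched in JavaScript. This is an illustrative sketch, not official sample code: the helper names and the two-second polling interval are our own choices, and the global `fetch` assumes Node 18+ or a supplied implementation.

```javascript
// Sketch of the analyze-then-poll flow described above (illustrative, not SDK code).

function buildAnalyzeUrl(endpoint, modelId, apiVersion) {
  // POST {endpoint}/formrecognizer/documentModels/{modelId}:analyze?api-version=...
  return `${endpoint}/formrecognizer/documentModels/${modelId}:analyze?api-version=${apiVersion}`;
}

async function analyzeAndPoll(endpoint, key, modelId, documentUrl, fetchImpl = fetch) {
  const headers = { "Ocp-Apim-Subscription-Key": key };

  // 1. Kick off the analysis; a 202 response carries an Operation-Location header.
  const post = await fetchImpl(buildAnalyzeUrl(endpoint, modelId, "2022-06-30-preview"), {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify({ urlSource: documentUrl }),
  });
  const resultUrl = post.headers.get("operation-location");

  // 2. Poll the result URL until the operation leaves the running states.
  for (;;) {
    const body = await (await fetchImpl(resultUrl, { headers })).json();
    if (body.status !== "running" && body.status !== "notStarted") return body;
    await new Promise((resolve) => setTimeout(resolve, 2000)); // interval is our choice
  }
}
```

`analyzeAndPoll` returns the final JSON body, whose `status` and `analyzeResult` fields correspond to the response examined below.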
### Get analyze results (GET Request)
-After you've called the [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument) API, call the [**Get analyze result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetAnalyzeDocumentResult) API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+After you've called the [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) API, call the [**Get analyze result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/GetAnalyzeDocumentResult) API to get the status of the operation and the extracted data. Before you run the command, make these changes:
1. Replace `{endpoint}` with the endpoint value from your Form Recognizer instance in the Azure portal.
1. Replace `{key}` with the key value from your Form Recognizer instance in the Azure portal.
-1. Replace `{modelID}` with the same model name you used to analyze your document.
+1. Replace `{modelID}` with the same modelID you used to analyze your document.
1. Replace `{resultID}` with the result ID from the [Operation-Location](#operation-location) header.

<!-- markdownlint-disable MD024 -->

#### GET request

```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/{model name}/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-06-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
```

#### Examine the response
You'll receive a `200 (Success)` response with JSON output. The first field, `"s
"createdDateTime": "2022-03-25T19:31:37Z", "lastUpdatedDateTime": "2022-03-25T19:31:43Z", "analyzeResult": {
- "apiVersion": "2022-01-30-preview",
+ "apiVersion": "2022-06-30-preview",
"modelId": "prebuilt-invoice", "stringIndexType": "textElements"... ..."pages": [
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
Previously updated : 05/23/2022 Last updated : 06/06/2022
For the usage with [Form Recognizer SDK](quickstarts/try-v3-csharp-sdk.md), [For
| Adjustable | No | No |
| **Max size of OCR json response** | 500 MB | 500 MB |
| Adjustable | No | No |
+| **Max number of Template models** | 500 | 5000 |
+| Adjustable | No | No |
+| **Max number of Neural models** | 100 | 500 |
+| Adjustable | No | No |
# [Form Recognizer v3.0 (Preview)](#tab/v30)
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
Form Recognizer v3.0 (preview) introduces several new features and capabilities:
* [**Custom document model (v3.0)**](concept-custom-neural.md) is a new custom model type to extract fields from structured and unstructured documents.
* [**Receipt (v3.0)**](concept-receipt.md) model supports single-page hotel receipt processing.
* [**ID document (v3.0)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
-* [**Custom model API (v3.0)**](concept-custom.md) supports signature detection for custom forms.
+* [**Custom model API (v3.0)**](concept-custom.md) supports signature detection for custom template models.
+* [**Custom model API (v3.0)**](overview.md) supports analysis of all the newly added prebuilt models. For a complete list of prebuilt models, see the [overview](overview.md) page.
In this article, you'll learn the differences between Form Recognizer v2.1 and v3.0 and how to move to the newer version of the API.
+> [!CAUTION]
+>
+> * REST API **2022-06-30-preview** release includes a breaking change in the REST API analyze response JSON.
+> * The `boundingBox` property is renamed to `polygon` in each instance.
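Client code written against the earlier preview can bridge the rename with a tiny accessor. This is an illustrative shim; only the two property names come from the release note above:

```javascript
// Bridge the boundingBox -> polygon rename described in the caution above.
// `element` is any word/line/selection-mark object from the analyze response.
function getPolygon(element) {
  // Prefer the new 2022-06-30-preview name; fall back to the older one.
  return element.polygon ?? element.boundingBox;
}
```

Once all callers go through the accessor, removing the fallback later is a one-line change.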
## Changes to the REST API endpoints

The v3.0 REST API combines the analysis operations for layout analysis, prebuilt models, and custom models into a single pair of operations by assigning **`documentModels`** and **`modelId`** to the layout analysis (prebuilt-layout) and prebuilt models.
In this article, you'll learn the differences between Form Recognizer v2.1 and v
### POST request ```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview
+https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-06-30
``` ### GET request ```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/AnalyzeResult/{resultId}?api-version=2022-01-30-preview
+https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/AnalyzeResult/{resultId}?api-version=2022-06-30
``` ### Analyze operation
https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/
| Model | v2.1 | v3.0 | |:--| :--| :--| | **Request URL prefix**| **https://{your-form-recognizer-endpoint}/formrecognizer/v2.1** | **https://{your-form-recognizer-endpoint}/formrecognizer** |
-|🆕 **General document**|N/A|/documentModels/prebuilt-document:analyze |
-| **Layout**| /layout/analyze |/documentModels/prebuilt-layout:analyze|
-|**Custom**| /custom/{modelId}/analyze |/documentModels/{modelId}:analyze |
-| **Invoice** | /prebuilt/invoice/analyze | /documentModels/prebuilt-invoice:analyze |
-| **Receipt** | /prebuilt/receipt/analyze | /documentModels/prebuilt-receipt:analyze |
-| **ID document** | /prebuilt/idDocument/analyze | /documentModels/prebuilt-idDocument:analyze |
-|**Business card**| /prebuilt/businessCard/analyze| /documentModels/prebuilt-businessCard:analyze|
-|**W-2**| /prebuilt/w-2/analyze| /documentModels/prebuilt-w-2:analyze|
+|🆕 **General document**|N/A|`/documentModels/prebuilt-document:analyze` |
+| **Layout**| /layout/analyze |`/documentModels/prebuilt-layout:analyze`|
+|**Custom**| /custom/{modelId}/analyze |`/documentModels/{modelId}:analyze` |
+| **Invoice** | /prebuilt/invoice/analyze | `/documentModels/prebuilt-invoice:analyze` |
+| **Receipt** | /prebuilt/receipt/analyze | `/documentModels/prebuilt-receipt:analyze` |
+| **ID document** | /prebuilt/idDocument/analyze | `/documentModels/prebuilt-idDocument:analyze` |
+|**Business card**| /prebuilt/businessCard/analyze| `/documentModels/prebuilt-businessCard:analyze`|
+|**W-2**| /prebuilt/w-2/analyze| `/documentModels/prebuilt-w-2:analyze`|
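The v3.0 pattern in the table is uniform enough to build programmatically. A sketch — the endpoint value is a placeholder, and the helper itself is illustrative, not part of any SDK:

```python
def analyze_url(endpoint: str, model_id: str,
                api_version: str = "2022-06-30") -> str:
    """Build a v3.0 analyze request URL for any prebuilt or custom model."""
    return (f"{endpoint.rstrip('/')}/formrecognizer/documentModels/"
            f"{model_id}:analyze?api-version={api_version}")

url = analyze_url("https://contoso.cognitiveservices.azure.com", "prebuilt-invoice")
# → https://contoso.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-invoice:analyze?api-version=2022-06-30
```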
### Analyze request body
Base64 encoding is also supported in Form Recognizer v3.0:
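For example, a Base64 request body can be built as below. This is a sketch: the field name `base64Source` is an assumption about the v3.0 request schema — verify it against the current REST reference before relying on it:

```python
import base64
import json

def base64_body(document_bytes: bytes) -> str:
    """Build a JSON analyze request body with the document inlined as
    Base64. The 'base64Source' field name is assumed, not confirmed."""
    encoded = base64.b64encode(document_bytes).decode("ascii")
    return json.dumps({"base64Source": encoded})

body = base64_body(b"%PDF-1.7 ...")
# Decoding the field recovers the original document bytes.
payload = json.loads(body)
assert base64.b64decode(payload["base64Source"]) == b"%PDF-1.7 ..."
```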
Parameters that continue to be supported:
-* `pages`
-* `locale`
+* `pages` : Analyze only a specific subset of pages in the document. Accepts a comma-separated list of 1-indexed page numbers and ranges, for example "1-3,5,7-9".
+* `locale` : Locale hint for text recognition and document analysis. The value may be a language code alone (for example, "en" or "fr") or a full BCP 47 language tag (for example, "en-US").
-Parameters no longer supported:
+Parameters no longer supported:
* includeTextDetails
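The `pages` range syntax described above expands to individual page numbers as follows — a minimal sketch of the format, not service code:

```python
def expand_pages(spec: str) -> list[int]:
    """Expand a 'pages' parameter value such as "1-3,5,7-9" into the
    individual 1-indexed page numbers it selects."""
    pages = []
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-")
            pages.extend(range(int(start), int(end) + 1))
        else:
            pages.append(int(part))
    return pages

assert expand_pages("1-3,5,7-9") == [1, 2, 3, 5, 7, 8, 9]
```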
Analyze response has been refactored to the following top-level results to suppo
{ // Basic analyze result metadata
-"apiVersion": "2022-01-30-preview", // REST API version used
+"apiVersion": "2022-06-30", // REST API version used
"modelId": "prebuilt-invoice", // ModelId used "stringIndexType": "textElements", // Character unit used for string offsets and lengths: // textElements, unicodeCodePoint, utf16CodeUnit // Concatenated content in global reading order across pages.
Analyze response has been refactored to the following top-level results to suppo
"angle": 0, // Orientation of content in clockwise direction (degree) "width": 0, // Page width "height": 0, // Page height
-"unit": "pixel", // Unit for width, height, and bounding box coordinates
+"unit": "pixel", // Unit for width, height, and polygon coordinates
"spans": [ // Parts of top-level content covered by page { "offset": 0, // Offset in content
Analyze response has been refactored to the following top-level results to suppo
{ "rowCount": 1, // Number of rows in table "columnCount": 1, // Number of columns in table
-"boundingRegions": [ // Bounding boxes potentially across pages covered by table
+"boundingRegions": [ // Polygons or Bounding boxes potentially across pages covered by table
{ "pageNumber": 1, // 1-indexed page number
-"boundingBox": [ ... ], // Bounding box
+"polygon": [ ... ], // Previously Bounding box, renamed to polygon in the 2022-06-30-preview API
} ], "spans": [ ... ], // Parts of top-level content covered by table // List of cells in table
Analyze response has been refactored to the following top-level results to suppo
] } -- ``` ## Build or train model
The model object has three updates in the new API
* ```modelId``` is now a property that can be set on a model for a human readable name. * ```modelName``` has been renamed to ```description```
-* ```buildMode``` is a new proerty with values of ```template``` for custom form models or ```neural``` for custom document models.
+* ```buildMode``` is a new property with values of ```template``` for custom form models or ```neural``` for custom document models.
-The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset, it returns the result via the Operation-Location header in the response. Poll this model operation URL, via a GET request to check the status of the build operation (minimum recommended interval between requests is 1 second). Unlike v2.1, this URL is not the resource location of the model. Instead, the model URL can be constructed from the given modelId, also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and result contains the custom model info. If errors are encountered, status is set to ```failed``` and the error is returned.
+The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset; it returns the result via the Operation-Location header in the response. Poll this model operation URL via a GET request to check the status of the build operation (the minimum recommended interval between requests is 1 second). Unlike v2.1, this URL isn't the resource location of the model. Instead, the model URL can be constructed from the given modelId, which is also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and the result contains the custom model info. If errors are encountered, status is set to ```failed``` and the error is returned.
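The polling loop described above can be sketched without network calls by simulating successive GET responses from the operation URL. The status values and fake responses below are illustrative only — in real code, `get_status` would issue an authenticated GET against the Operation-Location URL:

```python
import time

def poll_operation(get_status, interval_s: float = 1.0,
                   max_polls: int = 100) -> dict:
    """Poll a build operation until it reaches a terminal state.
    get_status is any callable returning the operation JSON as a dict."""
    for _ in range(max_polls):
        op = get_status()
        if op["status"] in ("succeeded", "failed"):
            return op
        time.sleep(interval_s)  # minimum recommended interval is 1 second
    raise TimeoutError("build operation did not complete")

# Simulated sequence of responses, as successive GETs might return them.
responses = iter([
    {"status": "running"},
    {"status": "running"},
    {"status": "succeeded", "result": {"modelId": "my-model"}},
])
final = poll_operation(lambda: next(responses), interval_s=0)
assert final["status"] == "succeeded"
```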
The following code is a sample build request using a SAS token. Note the trailing slash when setting the prefix or folder path. ```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-preview
+POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
{ "modelId": {modelId},
POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build
Model compose is now limited to single level of nesting. Composed models are now consistent with custom models with the addition of ```modelId``` and ```description``` properties. ```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:compose?api-version=2022-01-30-preview
+POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:compose?api-version=2022-06-30
{ "modelId": "{composedModelId}", "description": "{composedModelDescription}",
The only changes to the copy model function are:
***Authorize the copy*** ```json
-POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-version=2022-01-30-preview
+POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-version=2022-06-30
{ "modelId": "{targetModelId}", "description": "{targetModelDescription}",
POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-versio
Use the response body from the authorize action to construct the request for the copy. ```json
-POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copy-to?api-version=2022-01-30-preview
+POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copy-to?api-version=2022-06-30
{ "targetResourceId": "{targetResourceId}", "targetResourceRegion": "{targetResourceRegion}",
List models have been extended to now return prebuilt and custom models. All pre
***Sample list models request*** ```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-version=2022-01-30-preview
+GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-version=2022-06-30
``` ## Change to get model
GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-ve
As get model now includes prebuilt models, the get operation returns a ```docTypes``` dictionary. Each document type is described by its name, optional description, field schema, and optional field confidence. The field schema describes the list of fields potentially returned with the document type. ```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview
+GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-06-30
``` ## New get info operation
GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{model
The ```info``` operation on the service returns the custom model count and custom model limit. ```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/info? api-version=2022-01-30-preview
+GET https://{your-form-recognizer-endpoint}/formrecognizer/info?api-version=2022-06-30
``` ***Sample response***
GET https://{your-form-recognizer-endpoint}/formrecognizer/info? api-version=202
In this migration guide, you've learned how to upgrade your existing Form Recognizer application to use the v3.0 APIs. Continue to use the 2.1 API for all GA features and use the 3.0 API for any of the preview features.
-* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)
+* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
* [What is Form Recognizer?](overview.md) * [Form Recognizer quickstart](./quickstarts/try-sdk-rest-api.md)
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 02/28/2022 Last updated : 06/06/2022 - <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+## June 2022
+
+### Form Recognizer v3.0 preview release (beta.3)
+
+The **2022-06-30-preview** release is the latest update to the Form Recognizer service for v3.0 capabilities. There are considerable updates across the feature APIs:
+
+* [🆕 **Layout extends structure extraction**](concept-layout.md). Layout now includes more structure elements, including sections, section headers, and paragraphs. This update enables finer-grained document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction).
+* [🆕 **Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields also span multiple pages by default. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+* [🆕 **Custom template model tabular fields support for cross-page tables**](concept-custom-template.md). Custom form models now support tabular fields across pages. To learn more about tabular fields in custom template models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+* [🆕 **Invoice model output now includes general document key-value pairs**](concept-custom-template.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key value pairs](concept-invoice.md#key-value-pairs-preview).
+* [🆕 **Invoice language expansion**](concept-custom-template.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
+* [🆕 **Prebuilt business card**](concept-business-card.md). The business card model now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
+* [🆕 **Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [page extraction](concept-read.md#pages).
+ ## February 2022 ### Form Recognizer v3.0 preview release
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents. * [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices. * [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
-* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten support for the same features expands to Japanese and Korean in addition to English, Chinese Simplified, French, German, Italian, Portuguese, and Spanish languages.
+* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/try-v3-python-sdk.md), or [.NET](quickstarts/try-v3-csharp-sdk.md) SDK for the v3.0 preview API. #### Form Recognizer model data extraction
- | **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Entities** |
+ | **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Entities** |**Signatures**|
| :-- | :-: | :-: | :-: | :-: | :-: | :-: |
- |🆕Read | ✓ | | | | |
- |🆕General document | ✓ | ✓ | ✓ | ✓ | ✓ |
- | Layout | ✓ | | ✓ | ✓ | |
- | Invoice | ✓ | ✓ | ✓ | ✓ | |
- | Receipt | ✓ | ✓ | | | |
- | ID document | ✓ | ✓ | | | |
- | Business card | ✓ | ✓ | | | |
- | Custom | ✓ | ✓ | ✓ | ✓ | ✓ |
+ |🆕Read | ✓ | | | | | |
+ |🆕General document | ✓ | ✓ | ✓ | ✓ | ✓ | |
+ | Layout | ✓ | | ✓ | ✓ | | |
+ | Invoice | ✓ | ✓ | ✓ | ✓ | | |
+ | Receipt | ✓ | ✓ | | | | |
+ | ID document | ✓ | ✓ | | | | |
+ | Business card | ✓ | ✓ | | | | |
+ | Custom template | ✓ | ✓ | ✓ | ✓ | | ✓ |
+ | Custom neural | ✓ | ✓ | ✓ | ✓ | | |
#### Form Recognizer SDK beta preview release
pip package version 3.1.0b4
**Form Recognizer v2.1 public preview 3 is now available.** v2.1-preview.3 has been released, including the following features:
-* **New prebuilt ID model** The new prebuilt ID model enables customers to take IDs and return structured data to automate processing. It combines our powerful Optical Character Recognition (OCR) capabilities with ID understanding models to extract key information from passports and U.S. driver licenses, such as name, date of birth, issue date, expiration date, and more.
+* **New prebuilt ID model** The new prebuilt ID model enables customers to take IDs and return structured data to automate processing. It combines our powerful Optical Character Recognition (OCR) capabilities with ID understanding models to extract key information from passports and U.S. driver licenses.
[Learn more about the prebuilt ID model](./concept-id-document.md)
pip package version 3.1.0b4
:::image type="content" source="./media/table-labeling.png" alt-text="Table labeling" lightbox="./media/table-labeling.png":::
- In addition to labeling tables, you can now label empty values and regions; if some documents in your training set don't have values for certain fields, you can label them so that your model will know to extract values properly from analyzed documents.
+ In addition to labeling tables, you can now label empty values and regions. If some documents in your training set don't have values for certain fields, you can label them so that your model will know to extract values properly from analyzed documents.
* **Support for 66 new languages** - The Layout API and Custom Models for Form Recognizer now support 73 languages.
pip package version 3.1.0b4
![Screenshot: Sample Labeling tool.](./media/ui-preview.jpg) * **Feedback Loop** - When Analyzing files via the Sample Labeling tool you can now also add it to the training set and adjust the labels if necessary and train to improve the model.
-* **Auto Label Documents** - Automatically labels additional documents based on previous labeled documents in the project.
+* **Auto Label Documents** - Automatically labels added documents based on previous labeled documents in the project.
## August 2020
pip package version 3.1.0b4
* **CopyModel API added to client SDKs** - You can now use the client SDKs to copy models from one subscription to another. See [Back up and recover models](./disaster-recovery.md) for general information on this feature. * **Azure Active Directory integration** - You can now use your Azure AD credentials to authenticate your Form Recognizer client objects in the SDKs.
-* **SDK-specific changes** - This change includes both minor feature additions and breaking changes. For more information, *see* the SDK changelogs for more information.
+* **SDK-specific changes** - This change includes both minor feature additions and breaking changes. For more information, _see_ the SDK changelogs.
* [C# SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) * [Python SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) * [Java SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-jav)
pip package version 3.1.0b4
* [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme) * [JavaScript SDK](/javascript/api/overview/azure/ai-form-recognizer-readme)
- The new SDK supports all the features of the v2.0 REST API for Form Recognizer. For example, you can train a model with or without labels and extract text, key-value pairs and tables from your forms, extract data from receipts with the pre-built receipts service and extract text and tables with the layout service from your documents. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
+ The new SDK supports all the features of the v2.0 REST API for Form Recognizer. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
-* **Copy Custom Model** You can now copy models between regions and subscriptions using the new Copy Custom Model feature. Before invoking the Copy Custom Model API, you must first obtain authorization to copy into the target resource by calling the Copy Authorization operation against the target resource endpoint.
+* **Copy Custom Model** You can now copy models between regions and subscriptions using the new Copy Custom Model feature. Before invoking the Copy Custom Model API, you must first obtain authorization to copy into the target resource. You obtain this authorization by calling the Copy Authorization operation against the target resource endpoint.
* [Generate a copy authorization](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModelAuthorization) REST API * [Copy a custom model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModel) REST API
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
Follow the next steps to use a managed identity for Azure resources on a Hybrid
Get-AzVM -DefaultProfile $AzureContext | Select Name ```
- If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then:
 If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you run the runbook in an Azure sandbox instead of on a Hybrid Runbook Worker and want to use a user-assigned managed identity, then:
1. From line 5, remove `$AzureContext = (Connect-AzAccount -Identity).context`, 1. Replace it with `$AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context`, and 1. Enter the Client ID.
automation Automation Managing Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managing-data.md
The Automation geo-replication service isn't accessible directly to external cus
## Next steps
+* To learn about security guidelines, see [Security best practices in Azure Automation](automation-security-guidelines.md).
* To learn more about secure assets in Azure Automation, see [Encryption of secure assets in Azure Automation](automation-secure-asset-encryption.md).- * To find out more about geo-replication, see [Creating and using active geo-replication](/azure/azure-sql/database/active-geo-replication-overview).
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-role-based-access-control.md
Title: Manage role permissions and security in Azure Automation
-description: This article describes how to use Azure role-based access control (Azure RBAC), which enables access management for Azure resources.
+description: This article describes how to use Azure role-based access control (Azure RBAC), which enables access management and role permissions for Azure resources.
Last updated 09/10/2021
#Customer intent: As an administrator, I want to understand permissions so that I use the least necessary set of permissions.
-# Manage role permissions and security in Automation
+# Manage role permissions and security in Azure Automation
Azure role-based access control (Azure RBAC) enables access management for Azure resources. Using [Azure RBAC](../role-based-access-control/overview.md), you can segregate duties within your team and grant only the amount of access to users, groups, and applications that they need to perform their jobs. You can grant role-based access to users using the Azure portal, Azure Command-Line tools, or Azure Management APIs.
When a user assigned to the Automation Operator role on the Runbook scope views
## Next steps
+* To learn about security guidelines, see [Security best practices in Azure Automation](automation-security-guidelines.md).
* To find out more about Azure RBAC using PowerShell, see [Add or remove Azure role assignments using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md). * For details of the types of runbooks, see [Azure Automation runbook types](automation-runbook-types.md). * To start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-secure-asset-encryption.md
To revoke access to customer-managed keys, use PowerShell or the Azure CLI. For
## Next steps
+- To learn about security guidelines, see [Security best practices in Azure Automation](automation-security-guidelines.md).
- To understand Azure Key Vault, see [What is Azure Key Vault?](../key-vault/general/overview.md). - To work with certificates, see [Manage certificates in Azure Automation](shared-resources/certificates.md). - To handle credentials, see [Manage credentials in Azure Automation](shared-resources/credentials.md).
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
Title: Azure Automation security guidelines, security best practices Automation.
+ Title: Azure Automation security guidelines, security best practices for Automation jobs.
description: This article helps you with the guidelines that Azure Automation offers to ensure a secured configuration of Automation account, Hybrid Runbook worker role, authentication certificate and identities, network isolation and policies.
Last updated 02/16/2022
-# Best practices for security in Azure Automation
+# Security best practices in Azure Automation
This article details the best practices to securely execute the automation jobs. [Azure Automation](./overview.md) provides you the platform to orchestrate frequent, time consuming, error-prone infrastructure management and operational tasks, as well as mission-critical operations. This service allows you to execute scripts, known as automation runbooks seamlessly across cloud and hybrid environments.
automation Automation Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md
Title: Automation services in Azure - overview
-description: This article tells what are the Automation services in Azure and how to use it to automate the lifecycle of infrastructure and applications.
+description: This article describes the Automation services in Azure and how to compare and use them to automate the lifecycle of infrastructure and applications.
keywords: azure automation services, automanage, Bicep, Blueprints, Guest Config, Policy, Functions Last updated 03/04/2022
azure-app-configuration Howto Disable Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-disable-public-access.md
+
+ Title: How to disable public access in Azure App Configuration
+description: How to disable public access to your Azure App Configuration store.
++++ Last updated : 05/25/2022+++
+# Disable public access in Azure App Configuration
+
+In this article, you'll learn how to disable public access for your Azure App Configuration store. Setting up private access can offer better security for your configuration store.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An App Configuration store. If you don't have one, [create an App Configuration store](quickstart-aspnet-core-app.md).
+
+## Sign in to Azure
+
+You will need to sign in to Azure first to access the App Configuration service.
+
+### [Portal](#tab/azure-portal)
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+### [Azure CLI](#tab/azure-cli)
+
+Sign in to Azure using the `az login` command in the [Azure CLI](/cli/azure/install-azure-cli).
+
+```azurecli-interactive
+az login
+```
+
+This command will prompt your web browser to launch and load an Azure sign-in page. If the browser fails to open, use device code flow with `az login --use-device-code`. For more sign-in options, go to [sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+++
+## Disable public access to a store
+
+Azure App Configuration offers three public access options:
+
+- Automatic public access: public network access is enabled, as long as you don't have a private endpoint present. Once you create a private endpoint, App Configuration disables public network access and enables private access. This option can only be selected when creating the store.
+- Disabled: public access is disabled and no traffic can access this resource unless it's through a private endpoint.
+- Enabled: all networks can access this resource.
+
+To disable access to the App Configuration store from public networks, follow the process below.
+
+### [Portal](#tab/azure-portal)
+
+1. In your App Configuration store, under **Settings**, select **Networking**.
+1. Under **Public Access**, select **Disabled** to disable public access to the App Configuration store and only allow access through private endpoints. If you already had public access disabled and instead wanted to enable public access to your configuration store, you would select **Enabled**.
+
+ > [!NOTE]
+ > Once you've switched **Public Access** to **Disabled** or **Enabled**, you won't be able to select **Public Access: Automatic** anymore, as this option can only be selected when creating the store.
+
+1. Select **Apply**.
++
+### [Azure CLI](#tab/azure-cli)
+
+In the CLI, run the following code:
+
+```azurecli-interactive
+az appconfig update --name <name-of-the-appconfig-store> --enable-public-network false
+```
+
+> [!NOTE]
+> When you create an App Configuration store without specifying whether public access should be enabled or disabled, public access is set to automatic by default. After you've set the `--enable-public-network` parameter, you won't be able to switch back to automatic public access.
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+>[Using private endpoints for Azure App Configuration](./concept-private-endpoint.md)
azure-arc Create Complete Managed Instance Indirectly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-indirectly-connected.md
NAME STATE
<namespace> Ready ```
-## Create Azure Arc-enabled SQL Managed Instance
+## Create an instance of Azure Arc-enabled SQL Managed Instance
Now, we can create the Azure MI for indirectly connected mode with the following command:
To connect with Azure Data Studio, see [Connect to Azure Arc-enabled SQL Managed
## Next steps
-[Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md).
+[Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md).
azure-arc Delete Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-managed-instance.md
Title: Delete Azure Arc-enabled SQL Managed Instance
-description: Delete Azure Arc-enabled SQL Managed Instance
+ Title: Delete an Azure Arc-enabled SQL Managed Instance
+description: Learn how to delete an Azure Arc-enabled SQL Managed Instance and optionally, reclaim associated Kubernetes persistent volume claims (PVCs).
+
Last updated 07/30/2021
-# Delete Azure Arc-enabled SQL Managed Instance
-This article describes how you can delete an Azure Arc-enabled SQL Managed Instance.
+# Delete an Azure Arc-enabled SQL Managed Instance
+In this how-to guide, you'll find and then delete an Azure Arc-enabled SQL Managed Instance. Optionally, after deleting managed instances, you can reclaim associated Kubernetes persistent volume claims (PVCs).
-## View Existing Azure Arc-enabled SQL Managed Instances
-To view SQL Managed Instances, run the following command:
+1. Find existing Azure Arc-enabled SQL Managed Instances:
-```azurecli
-az sql mi-arc list --k8s-namespace <namespace> --use-k8s
-```
+ ```azurecli
+ az sql mi-arc list --k8s-namespace <namespace> --use-k8s
+ ```
-Output should look something like this:
+ Example output:
-```console
-Name Replicas ServerEndpoint State
- - - -
-demo-mi 1/1 10.240.0.4:32023 Ready
-```
+ ```console
+ Name Replicas ServerEndpoint State
+ - - -
+ demo-mi 1/1 10.240.0.4:32023 Ready
+ ```
-## Delete Azure Arc-enabled SQL Managed Instance
+1. To delete the SQL Managed Instance, run the command appropriate for your deployment type:
-To delete a SQL Managed Instance, run the appropriate command for your deployment type. For example:
+ 1. **Indirectly connected mode**:
-### [Indirectly connected mode](#tab/indirectly)
+ ```azurecli
+ az sql mi-arc delete --name <instance_name> --k8s-namespace <namespace> --use-k8s
+ ```
-```azurecli
-az sql mi-arc delete -n <instance_name> --k8s-namespace <namespace> --use-k8s
-```
+ Example output:
-Output should look something like this:
+ ```azurecli
+ # az sql mi-arc delete --name demo-mi --k8s-namespace <namespace> --use-k8s
+ Deleted demo-mi from namespace arc
+ ```
-```azurecli
-# az sql mi-arc delete -n demo-mi --k8s-namespace <namespace> --use-k8s
-Deleted demo-mi from namespace arc
-```
+ 1. **Directly connected mode**:
-### [Directly connected mode](#tab/directly)
+ ```azurecli
+ az sql mi-arc delete --name <instance_name> --resource-group <resource_group>
+ ```
-```azurecli
-az sql mi-arc delete -n <instance_name> -g <resource_group>
-```
+ Example output:
-Output should look something like this:
+ ```azurecli
+ # az sql mi-arc delete --name demo-mi --resource-group my-rg
+ Deleted demo-mi from namespace arc
+ ```
-```azurecli
-# az sql mi-arc delete -n demo-mi -g my-rg
-Deleted demo-mi from namespace arc
-```
+## Optional - Reclaim Kubernetes PVCs
-
+A persistent volume claim (PVC) is a request for storage made to a Kubernetes cluster when creating and adding storage to a SQL Managed Instance. Deleting PVCs is recommended but not mandatory; if you don't reclaim them, you'll eventually end up with errors in your Kubernetes cluster. For example, you might be unable to create, read, update, or delete resources from the Kubernetes API, or to run commands like `az arcdata dc export`, because the controller pods were evicted from the Kubernetes nodes due to storage issues (normal Kubernetes behavior). You might see messages in the logs similar to:
-## Reclaim the Kubernetes Persistent Volume Claims (PVCs)
+- Annotations: microsoft.com/ignore-pod-health: true
+- Status: Failed
+- Reason: Evicted
+- Message: The node was low on resource: ephemeral-storage. Container controller was using 16372Ki, which exceeds its request of 0.
-A PersistentVolumeClaim (PVC) is a request for storage by a user from Kubernetes cluster while creating and adding storage to a SQL Managed Instance. Deleting a SQL Managed Instance does not remove its associated [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). This is by design. The intention is to help the user to access the database files in case the deletion of instance was accidental. Deleting PVCs is not mandatory. However it is recommended. If you don't reclaim these PVCs, you'll eventually end up with errors as your Kubernetes cluster will run out of disk space or usage of the same SQL Managed Instance name while creating new instance might cause inconsistencies. To reclaim the PVCs, take the following steps:
+By design, deleting a SQL Managed Instance doesn't remove its associated [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). The intention is to ensure that you can access the database files in case the deletion was accidental.
-### 1. List the PVCs for the server group you deleted
+1. To reclaim the PVCs, take the following steps:
+ 1. Find the PVCs for the server group you deleted.
-To list the PVCs, run the following command:
-```console
-kubectl get pvc
-```
+ ```console
+ kubectl get pvc
+ ```
-In the example below, notice the PVCs for the SQL Managed Instances you deleted.
+ In the example below, notice the PVCs for the SQL Managed Instances you deleted.
-```console
-# kubectl get pvc -n arc
+ ```console
+ # kubectl get pvc -n arc
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-data-demo-mi-0 Bound pvc-1030df34-4b0d-4148-8986-4e4c20660cc4 5Gi RWO managed-premium 13h
-logs-demo-mi-0 Bound pvc-11836e5e-63e5-4620-a6ba-d74f7a916db4 5Gi RWO managed-premium 13h
-```
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ data-demo-mi-0 Bound pvc-1030df34-4b0d-4148-8986-4e4c20660cc4 5Gi RWO managed-premium 13h
+ logs-demo-mi-0 Bound pvc-11836e5e-63e5-4620-a6ba-d74f7a916db4 5Gi RWO managed-premium 13h
+ ```
-### 2. Delete each of the PVCs
-Delete the data and log PVCs for each of the SQL Managed Instances you deleted.
-The general format of this command is:
-```console
-kubectl delete pvc <name of pvc>
-```
+ 1. Delete the data and log PVCs for each of the SQL Managed Instances you deleted.
+ The general format of this command is:
-For example:
-```console
-kubectl delete pvc data-demo-mi-0 -n arc
-kubectl delete pvc logs-demo-mi-0 -n arc
-```
+ ```console
+ kubectl delete pvc <name of pvc>
+ ```
-Each of these kubectl commands will confirm the successful deleting of the PVC. For example:
-```console
-persistentvolumeclaim "data-demo-mi-0" deleted
-persistentvolumeclaim "logs-demo-mi-0" deleted
-```
-
+ For example:
-> [!NOTE]
-> As indicated, not deleting the PVCs might eventually get your Kubernetes cluster in a situation where it will throw errors. Some of these errors may include being unable to create, read, update, delete resources from the Kubernetes API, or being able to run commands like `az arcdata dc export` as the controller pods may be evicted from the Kubernetes nodes because of this storage issue (normal Kubernetes behavior).
->
-> For example, you may see messages in the logs similar to:
-> - Annotations: microsoft.com/ignore-pod-health: true
-> - Status: Failed
-> - Reason: Evicted
-> - Message: The node was low on resource: ephemeral-storage. Container controller was using 16372Ki, which exceeds its request of 0.
+ ```console
+ kubectl delete pvc data-demo-mi-0 -n arc
+ kubectl delete pvc logs-demo-mi-0 -n arc
+ ```
+ Each of these kubectl commands confirms the successful deletion of the PVC. For example:
+
+ ```console
+ persistentvolumeclaim "data-demo-mi-0" deleted
+ persistentvolumeclaim "logs-demo-mi-0" deleted
+ ```
+
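If an instance left behind several PVCs, they can be removed in one pass by filtering on the instance name. This is a sketch, assuming the PVCs follow the default `data-<instance>-N`/`logs-<instance>-N` naming shown in the example above:

```console
# List PVC resource names in the arc namespace, keep only the ones for the
# deleted instance (here: demo-mi), and delete them. This is destructive --
# review the output of "kubectl get pvc -n arc" before running it.
kubectl get pvc -n arc -o name | grep demo-mi | xargs -r kubectl delete -n arc
```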
## Next steps Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md
az sql mi-arc create -n <instanceName> --k8s-namespace <namespace> --use-k8s --t
Example: ```azurecli
-az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s --tier bc --replicas 3
+az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s --tier BusinessCritical --replicas 3
``` Directly connected mode:
az sql mi-arc create --name <name> --resource-group <group> --location <Azure l
``` Example: ```azurecli
-az sql mi-arc create --name sqldemo --resource-group rg --location uswest2 ΓÇôsubscription xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --tier bc --replcias 3
+az sql mi-arc create --name sqldemo --resource-group rg --location westus2 --subscription xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --tier BusinessCritical --replicas 3
``` By default, all the replicas are configured in synchronous mode. This means any updates on the primary instance will be synchronously replicated to each of the secondary instances.
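After creating a multi-replica instance, you can verify its state before relying on it. A minimal sketch, reusing the instance and namespace names from the indirectly connected example above:

```azurecli
# Show the instance details, including its current state and replica count.
az sql mi-arc show -n sqldemo --k8s-namespace my-namespace --use-k8s

# Optionally, confirm that one pod per replica (sqldemo-0, sqldemo-1, sqldemo-2) is running.
kubectl get pods -n my-namespace
```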
azure-arc Reference Az Arcdata Dc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc.md
Increase logging verbosity. Use `--debug` for full debug logs.
## az arcdata dc export Export metrics, logs or usage to a file. ```azurecli
-az arcdata dc export
+az arcdata dc export -t logs --path logs.json --k8s-namespace namespace --use-k8s
``` ### Global Arguments #### `--debug`
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
You can install the Azure Key Vault Secrets Provider extension on your connected
### Azure portal
-1. In the [Azure portal](https://portal/azure.com), navigate to **Kubernetes - Azure Arc** and select your cluster.
+1. In the [Azure portal](https://portal.azure.com), navigate to **Kubernetes - Azure Arc** and select your cluster.
1. Select **Extensions** (under **Settings**), and then select **+ Add**. [![Screenshot showing the Extensions page for an Arc-enabled Kubernetes cluster in the Azure portal.](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg)](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg#lightbox)
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
The application repository contains a `.pipeline` folder with the pipelines you'
| Pipeline file name | Description | | - | - |
-| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** |
-| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/.pipelines/az-vote-ci-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** |
-| [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/.pipelines/az-vote-cd-pipeline.yaml) | The application CD pipeline, named **arc-cicd-demo-src CD** |
+| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** |
+| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-ci-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** |
+| [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) | The application CD pipeline, named **arc-cicd-demo-src CD** |
### Connect Azure Container Registry to Azure DevOps During the CI process, you'll deploy your application containers to a registry. Start by creating an Azure service connection:
A successful CI pipeline run triggers the CD pipeline to complete the deployment
* View the Azure Vote app in your browser at `http://localhost:8080/` and verify the voting choices have changed to Tabs vs Spaces. 1. Repeat steps 1-7 for the `stage` environment.
-Your deployment is now complete. This ends the CI/CD workflow. Refer to the [Azure DevOps GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/docs/azdo-gitops.md) in the application repository that explains in details the steps and techniques implemented in the CI/CD pipelines used in this tutorial.
+Your deployment is now complete. This ends the CI/CD workflow. Refer to the [Azure DevOps GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops.md) in the application repository that explains in details the steps and techniques implemented in the CI/CD pipelines used in this tutorial.
## Implement CI/CD with GitHub
The CD Stage workflow:
Once the manifests PR to the Stage environment is merged and Flux successfully applied all the changes, it updates Git commit status in the GitOps repository.
-Your deployment is now complete. This ends the CI/CD workflow. Refer to the [GitHub GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/docs/azdo-gitops-githubfluxv2.md) in the application repository that explains in details the steps and techniques implemented in the CI/CD workflows used in this tutorial.
+Your deployment is now complete. This ends the CI/CD workflow. Refer to the [GitHub GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops-githubfluxv2.md) in the application repository that explains in details the steps and techniques implemented in the CI/CD workflows used in this tutorial.
## Clean up resources
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Title: "Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters."
-keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops"
+keywords: "GitOps, Flux, Flux v2, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops"
Previously updated : 05/24/2022 Last updated : 06/06/2022
Here's an example for including the [Flux image-reflector and image-automation c
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connectedClusters or managedClusters> --name flux --extension-type microsoft.flux --config image-automation-controller.enabled=true image-reflector-controller.enabled=true ```
+### Red Hat OpenShift onboarding guidance
+Flux controllers require a **nonroot** [Security Context Constraint](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.2/html/authentication/managing-pod-security-policies) to properly provision pods on the cluster. These constraints must be added to the cluster prior to onboarding of the `microsoft.flux` extension.
+
+```console
+NS="flux-system"
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:kustomize-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:helm-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:source-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:notification-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:image-automation-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:image-reflector-controller
+```
+
+For more information on OpenShift guidance for onboarding Flux, refer to the [Flux documentation](https://fluxcd.io/docs/use-cases/openshift/#openshift-setup).
+ ## Work with parameters For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). Flux in Azure doesn't support all parameters yet. Let us know if a parameter you need is missing from the Azure implementation.
azure-arc Quick Enable Hybrid Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md
Title: Quickstart - Connect hybrid machine with Azure Arc-enabled servers description: In this quickstart, you connect and register a hybrid machine with Azure Arc-enabled servers. Previously updated : 05/20/2022 Last updated : 06/06/2022
Use the Azure portal to create a script that automates the agent download and in
1. On the **Servers - Azure Arc** page, select **Add** near the upper left.-->
-1. [Go to the Azure portal page for adding servers with Azure Arc](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/HybridVmAddBlade). Select the **Add a single server** tile, then select **Generate script**.
+1. [Go to the Azure portal page for adding servers with Azure Arc](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/HybridVmAddBlade). Select the **Add a single server** tile, then select **Generate script**.
:::image type="content" source="media/quick-enable-hybrid-vm/add-single-server.png" alt-text="Screenshot of Azure portal's add server page." lightbox="media/quick-enable-hybrid-vm/add-single-server-expanded.png"::: > [!NOTE]
azure-cache-for-redis Cache How To Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-version.md
Previously updated : 10/07/2021 Last updated : 06/03/2022+ # Set Redis version for Azure Cache for Redis
-In this article, you'll learn how to configure the Redis software version to be used with your cache instance. Azure Cache for Redis offers the latest major version of Redis and at least one previous version. It will update these versions regularly as newer Redis software is released. You can choose between the two available versions. Keep in mind that your cache will be upgraded to the next version automatically if the version it's using currently is no longer supported.
+
+In this article, you'll learn how to configure the Redis software version to be used with your cache instance. Azure Cache for Redis offers the latest major version of Redis and at least one previous version. It will update these versions regularly as newer Redis software is released. You can choose between the two available versions. Keep in mind that your cache will be upgraded to the next version automatically if the version it's using currently is no longer supported.
> [!NOTE] > At this time, Redis 6 does not support ACL, and geo-replication between a Redis 4 and 6 cache. > ## Prerequisites+ * Azure subscription - [create one for free](https://azure.microsoft.com/free/) ## Create a cache using the Azure portal+ To create a cache, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**.
To create a cache, follow these steps:
1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**. :::image type="content" source="media/cache-create/new-cache-menu.png" alt-text="Select Azure Cache for Redis.":::
-
+ 1. On the **Basics** page, configure the settings for your new cache.
-
+ | Setting | Suggested value | Description | | | - | -- | | **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
To create a cache, follow these steps:
| **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. | | **Location** | Select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. | | **Cache type** | Select a [cache tier and size](https://azure.microsoft.com/pricing/details/cache/). | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). |
-
+ 1. On the **Advanced** page, choose a Redis version to use.
-
+ :::image type="content" source="media/cache-how-to-version/select-redis-version.png" alt-text="Redis version.":::
-1. Select **Create**.
-
- It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
+1. Select **Create**.
+ It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
## Create a cache using Azure PowerShell ```azurepowershell New-AzRedisCache -ResourceGroupName "ResourceGroupName" -Name "CacheName" -Location "West US 2" -Size 250MB -Sku "Standard" -RedisVersion "6" ```+ For more information on how to manage Azure Cache for Redis with Azure PowerShell, see [here](cache-how-to-manage-redis-cache-powershell.md) ## Create a cache using Azure CLI
az redis create --resource-group resourceGroupName --name cacheName --location w
For more information on how to manage Azure Cache for Redis with Azure CLI, see [here](cli-samples.md) ## Upgrade an existing Redis 4 cache to Redis 6
-Azure Cache for Redis supports upgrading your Redis cache server major version from Redis 4 to Redis 6. Please note that upgrading is permanent and it may cause a brief connection blip. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading. Please see [here](cache-how-to-import-export-data.md) for details on how to export.
+
+Azure Cache for Redis supports upgrading your Redis cache server major version from Redis 4 to Redis 6. Upgrading is permanent and it might cause a brief connection blip. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading. For details on how to export, see [here](cache-how-to-import-export-data.md).
> [!NOTE] > Please note, upgrading is not supported on a cache with a geo-replication link, so you will have to manually unlink your cache instances before upgrading.
Azure Cache for Redis supports upgrading your Redis cache server major version f
To upgrade your cache, follow these steps:
+### Upgrade using the Azure portal
+ 1. In the Azure portal, search for **Azure Cache for Redis**. Then, press enter or select it from the search suggestions. :::image type="content" source="media/cache-private-link/4-search-for-cache.png" alt-text="Search for Azure Cache for Redis.":::
To upgrade your cache, follow these steps:
1. If your cache instance is eligible to be upgraded, you should see the following blue banner. If you wish to proceed, select the text in the banner. :::image type="content" source="media/cache-how-to-version/blue-banner-upgrade-cache.png" alt-text="Blue banner that says you can upgrade your Redis 6 cache with additional features and commands that enhance developer productivity and ease of use. Upgrading your cache instance cannot be reversed.":::
-
-1. A dialog box will then popup notifying you that upgrading is permanent and may cause a brief connection blip. Select yes if you would like to upgrade your cache instance.
+
+1. A dialog box notifies you that upgrading is permanent and might cause a brief connection blip. Select **Yes** to upgrade your cache instance.
:::image type="content" source="media/cache-how-to-version/dialog-version-upgrade.png" alt-text="Dialog with more information about upgrading your cache.":::
To upgrade your cache, follow these steps:
:::image type="content" source="media/cache-how-to-version/upgrade-status.png" alt-text="Overview shows status of cache being upgraded.":::
+### Upgrade using Azure CLI
+
+To upgrade a cache from 4 to 6 using the Azure CLI, use the following command:
+
+```azurecli-interactive
+az redis update --name cacheName --resource-group resourceGroupName --set redisVersion=6
+```
+
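Before running the upgrade, you can confirm the cache's current version. A sketch, assuming the same cache and resource group names as the example above:

```azurecli-interactive
# Query only the redisVersion property of the cache.
az redis show --name cacheName --resource-group resourceGroupName --query redisVersion --output tsv
```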
+### Upgrade using PowerShell
+
+To upgrade a cache from 4 to 6 using PowerShell, use the following command:
+
+```powershell-interactive
+Set-AzRedisCache -Name "CacheName" -ResourceGroupName "ResourceGroupName" -RedisVersion "6"
+```
+ ## FAQ ### What features aren't supported with Redis 6?
-At this time, Redis 6 does not support ACL, and geo-replication between a Redis 4 and 6 cache.
+At this time, Redis 6 doesn't support ACLs or geo-replication between a Redis 4 and a Redis 6 cache.
### Can I change the version of my cache after it's created?
-You can upgrade your existing Redis 4 caches to Redis 6, please see [here](#upgrade-an-existing-redis-4-cache-to-redis-6) for details. Please note upgrading your cache instance is permanent and you cannot downgrade your Redis 6 caches to Redis 4 caches.
+To upgrade your existing Redis 4 caches to Redis 6, see [here](#upgrade-an-existing-redis-4-cache-to-redis-6) for details. Upgrading your cache instance is permanent and you can't downgrade your Redis 6 caches to Redis 4 caches.
## Next Steps - To learn more about Redis 6 features, see [Diving Into Redis 6.0 by Redis](https://redis.com/blog/diving-into-redis-6/)-- To learn more about Azure Cache for Redis features:-
-> [!div class="nextstepaction"]
-> [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+- To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-edge-hardware-center Azure Edge Hardware Center Create Order https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-edge-hardware-center/azure-edge-hardware-center-create-order.md
Previously updated : 01/03/2022 Last updated : 05/04/2022 # Customer intent: As an IT admin, I need to understand how to create an order via the Azure Edge Hardware Center.
Before you begin:
For information on how to register, go to [Register resource provider](../databox-online/azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#register-resource-providers). -- Make sure that all the other prerequisites related to the product that you are ordering are met. For example, if ordering Azure Stack Edge device, ensure that all the [Azure Stack Edge prerequisites](../databox-online/azure-stack-edge-gpu-deploy-prep.md#prerequisites) are completed.
+- Make sure that all the other prerequisites related to the product that you're ordering are met. For example, if ordering Azure Stack Edge device, ensure that all the [Azure Stack Edge prerequisites](../databox-online/azure-stack-edge-gpu-deploy-prep.md#prerequisites) are completed.
## Create an order
azure-edge-hardware-center Azure Edge Hardware Center Manage Order https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-edge-hardware-center/azure-edge-hardware-center-manage-order.md
Previously updated : 01/03/2022 Last updated : 06/01/2022 # Use the Azure portal to manage your Azure Edge Hardware Center orders
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
func new --template "Http Trigger" --name MyHttpTrigger
This example creates a Queue Storage trigger named `MyQueueTrigger`: ```
-func new --template "Queue Trigger" --name MyQueueTrigger
+func new --template "Azure Queue Storage Trigger" --name MyQueueTrigger
``` To learn more, see the [`func new` command](functions-core-tools-reference.md#func-new).
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
After your application is registered with Azure AD, you can create work items in
Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, Activity Log alerts, and Azure Log Analytics alerts in the Azure portal. To learn more about action groups, see [Create and manage action groups in the Azure portal](../alerts/action-groups.md).
+> [!NOTE]
+> If you're using a log search alert, note that the query should project a "Computer" column with the configuration items list in order to include them as part of the payload.
+ To add a webhook to an action, follow these instructions for Secure Webhook: 1. In the [Azure portal](https://portal.azure.com/), search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md
Title: Release annotations for Application Insights | Microsoft Docs
description: Learn how to create annotations to track deployment or other significant events with Application Insights. Last updated 07/20/2021
-ms.reviwer: casocha
++ # Release annotations for Application Insights
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
Last updated 11/23/2016 ms.devlang: csharp, javascript, python
-ms.reviwer: cithomas
+ # Filter and preprocess telemetry in the Application Insights SDK
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Last updated 05/16/2022 ms.devlang: csharp, java, javascript, python -+ # Application Map: Triage Distributed Applications
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
ms.devlang: csharp Last updated 10/12/2021+ # Application Insights for ASP.NET Core applications
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Last updated 08/26/2020 ms.devlang: csharp + # Dependency Tracking in Azure Application Insights
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
ms.devlang: csharp Last updated 05/19/2021+ + # Diagnose exceptions in web apps with Application Insights Exceptions in web applications can be reported with [Application Insights](./app-insights-overview.md). You can correlate failed requests with exceptions and other events on both the client and server, so that you can quickly diagnose the causes. In this article, you'll learn how to set up exception reporting, report exceptions explicitly, diagnose failures, and more.
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
ms.devlang: csharp Last updated 05/08/2019+ + # Explore .NET/.NET Core and Python trace logs in Application Insights Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to [Azure Application Insights][start]. For Python applications, send diagnostic tracing logs using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search them. Those logs are merged with the other log files from your application, so you can identify traces that are associated with each user request and correlate them with other events and exception reports.
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
ms.devlang: csharp Last updated 05/21/2020+ + # Troubleshooting no data - Application Insights for .NET/.NET Core [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Automate Custom Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-custom-reports.md
Title: Automate custom reports with Application Insights data
description: Automate custom daily/weekly/monthly reports with Azure Monitor Application Insights data Last updated 05/20/2019-+
+ms.pmowner: vitalyg
# Automate custom reports with Application Insights data
azure-monitor Availability Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-alerts.md
description: Learn how to set up web tests in Application Insights. Get alerts i
Last updated 06/19/2019 + # Availability alerts
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Last updated 05/22/2019 ms.devlang: csharp -
+ms.pmowner: casocha
+ # Configuring the Application Insights SDK with ApplicationInsights.config or .xml
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Legacy table: availability
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: browserTimings
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: dependencies
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: customEvents
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: customMetrics
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: pageViews
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: performanceCounters
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |category|string|Category|string|
Legacy table: requests
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: exceptions
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |assembly|string|Assembly|string|
Legacy table: traces
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
+
+ Title: Application Insights SDK support guidance
+description: Support guidance for Application Insights legacy and preview SDKs
++ Last updated : 03/24/2022+++
+# Application Insights SDK support guidance
+
+Microsoft announces feature deprecations or breaking changes at least three years in advance and strives to provide a seamless process for migration to the replacement experience.
+
+The [Microsoft Azure SDK lifecycle policy](https://docs.microsoft.com/lifecycle/faq/azure) is followed when features are enhanced in a new SDK or before an SDK is designated as legacy. Microsoft strives to retain legacy SDK functionality, but newer features may not be available with older versions.
+
+> [!NOTE]
+> Diagnostic tools often provide better insight into the root cause of a problem when the latest stable SDK version is used.
+
+Support engineers are expected to provide SDK update guidance according to the following table, referencing the current SDK version in use and any alternatives.
+
+|Current SDK version in use |Alternative version available |Update policy for support |
+| -- | -- | -- |
+|Stable and less than one year old | Newer supported stable version | **UPDATE RECOMMENDED** |
+|Stable and more than one year old | Newer supported stable version | **UPDATE REQUIRED** |
+|Unsupported ([support policy](https://docs.microsoft.com/lifecycle/faq/azure)) | Any supported version | **UPDATE REQUIRED** |
+|Preview | Stable version | **UPDATE REQUIRED** |
+|Preview | Older stable version | **UPDATE RECOMMENDED** |
+|Preview | Newer preview version, no older stable version | **UPDATE RECOMMENDED** |
+
+> [!TIP]
+> Switching to [auto-instrumentation](codeless-overview.md) eliminates the need for manual SDK updates.
+
+> [!WARNING]
+> Only commercially reasonable support is provided for Preview versions of the SDK. If a support incident requires escalation to development for further guidance, customers will be asked to use a fully supported SDK version to continue support. Commercially reasonable support does not include an option to engage Microsoft product development resources; technical workarounds may be limited or not possible.
+
+To see the current versions of the Application Insights SDKs and the release dates of previous versions, see the [release notes](release-notes.md).
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The table below lists the available curated visualizations and more detailed inf
| [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible Application Performance Management (APM) service which monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes. | | [Azure Monitor Log Analytics Workspace](./logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). | | [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
- | [Azure SQL insights (preview)](./insights/sql-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
+ | [Azure SQL insights (preview)](./insights/sql-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
| [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. | | [Azure Network Insights](./insights/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resource. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resource that are hosting your website, by simply searching for your website name. | | [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
azure-netapp-files Azure Policy Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-policy-definitions.md
+
+ Title: Azure Policy definitions for Azure NetApp Files | Microsoft Docs
+description: Describes the Azure Policy custom definitions and built-in definitions that you can use with Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 06/02/2022++
+# Azure Policy definitions for Azure NetApp Files
+
+[Azure Policy](../governance/policy/overview.md) helps to enforce organizational standards and to assess compliance at-scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources.
+
+Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. Policy definitions for these common use cases are already available in your Azure environment as built-ins to help you get started.
+
+The process of [creating and implementing a policy in Azure Policy](../governance/policy/tutorials/create-and-manage.md) begins with creating a (built-in or custom) [policy definition](../governance/policy/overview.md#policy-definition). Every policy definition has conditions under which it's enforced. It also has a defined [***effect***](../governance/policy/concepts/effects.md) that takes place if the conditions are met. Azure NetApp Files is supported with both Azure Policy custom and built-in policy definitions.
+
+## Custom policy definitions
+
+Azure NetApp Files supports Azure Policy. You can integrate Azure NetApp Files with Azure Policy through [creating custom policy definitions](../governance/policy/tutorials/create-custom-policy-definition.md). You can find examples in [Enforce Snapshot Policies with Azure Policy](https://anfcommunity.com/2021/08/30/enforce-snapshot-policies-with-azure-policy/) and [Azure Policy now available for Azure NetApp Files](https://anfcommunity.com/2021/04/19/azure-policy-now-available-for-azure-netapp-files/).
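A minimal sketch of what such a custom definition's policy rule might look like, assuming a deny effect on NFSv3 volumes (the resource type and alias names here are illustrative assumptions; verify them against the aliases available in your environment before use):

```json
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.NetApp/netAppAccounts/capacityPools/volumes"
      },
      {
        "field": "Microsoft.NetApp/netAppAccounts/capacityPools/volumes/protocolTypes[*]",
        "equals": "NFSv3"
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
```

The linked tutorials walk through wrapping a rule like this in a full policy definition and assigning it to a scope.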
+
+## Built-in policy definitions
+
+The Azure Policy built-in definitions for Azure NetApp Files enable organization admins to restrict creation of unsecure volumes or audit existing volumes. Each policy definition in Azure Policy has a single *effect*. That effect determines what happens when the policy rule is evaluated to match.
+
+The following effects of Azure Policy can be used with Azure NetApp Files:
+
+* *Deny* creation of non-compliant volumes
+* *Audit* existing volumes for compliance
+* *Disable* a policy definition
+
+The following Azure Policy built-in definitions are available for use with Azure NetApp Files:
+
+* *Azure NetApp Files volumes should not use NFSv3 protocol type.*
+ This policy definition disallows the use of the NFSv3 protocol type to prevent unsecure access to volumes. NFSv4.1 or NFSv4.1 with Kerberos protocol should be used to access NFS volumes to ensure data integrity and encryption.
+
+* *Azure NetApp Files volumes of type NFSv4.1 should use Kerberos data encryption.*
+ This policy definition allows only the use of Kerberos privacy (`krb5p`) security mode to ensure that data is encrypted.
+
+* *Azure NetApp Files volumes of type NFSv4.1 should use Kerberos data integrity or data privacy.*
+ This policy definition ensures that either Kerberos integrity (`krb5i`) or Kerberos privacy (`krb5p`) is selected to ensure data integrity and data privacy.
+
+* *Azure NetApp Files SMB volumes should use SMB3 encryption.*
+ This policy definition disallows the creation of SMB volumes without SMB3 encryption to ensure data integrity and data privacy.
+
+To learn how to assign a policy to resources and view compliance report, see [Assign the Policy](../storage/common/transport-layer-security-configure-minimum-version.md#assign-the-policy).
+
+## Next steps
+
+* [Azure Policy documentation](/azure/governance/policy/)
azure-netapp-files Faq Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-integration.md
Previously updated : 10/11/2021 Last updated : 06/02/2022 # Integration FAQs for Azure NetApp Files
You can mount Azure NetApp Files NFS volumes on AVS Windows VMs or Linux VMs. Yo
Using Azure NetApp Files NFS or SMB volumes with AVS for *Guest OS mounts* is supported in [all AVS and ANF enabled regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware,netapp).
-## Does Azure NetApp Files work with Azure Policy?
-
-Yes. Azure NetApp Files is a first-party service. It fully adheres to Azure Resource Provider standards. As such, Azure NetApp Files can be integrated into Azure Policy via *custom policy definitions*. For information about how to implement custom policies for Azure NetApp Files, see
-[Azure Policy now available for Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure/azure-policy-now-available-for-azure-netapp-files/m-p/2282258) on Microsoft Tech Community.
- ## Which Unicode Character Encoding is supported by Azure NetApp Files for the creation and display of file and directory names? Azure NetApp Files only supports file and directory names that are encoded with the UTF-8 Unicode Character Encoding format for both NFS and SMB volumes.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 05/25/2022 Last updated : 06/02/2022
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## June 2022
+
+* [Azure Policy built-in definitions for Azure NetApp](azure-policy-definitions.md#built-in-policy-definitions)
+
+ Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. Azure NetApp Files already supports Azure Policy via custom policy definitions. Azure NetApp Files now also provides built-in policy to enable organization admins to restrict creation of unsecure NFS volumes or audit existing volumes more easily.
+ ## May 2022 * [LDAP signing](create-active-directory-connections.md#ldap-signing) now generally available (GA)
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
Last updated 03/25/2022 + # Connection string in Azure SignalR Service Connection string is an important concept that contains information about how to connect to SignalR service. In this article, you'll learn the basics of connection string and how to configure it in your application.
Connection string is an important concept that contains information about how to
When an application needs to connect to Azure SignalR Service, it will need the following information:
-* The HTTP endpoint of the SignalR service instance
-* How to authenticate with the service endpoint
+- The HTTP endpoint of the SignalR service instance
+- How to authenticate with the service endpoint
+
+A connection string contains this information.
+
+## What a connection string looks like
+
+A connection string consists of a series of key/value pairs separated by semicolons (;), with an equal sign (=) connecting each key and its value. Keys aren't case sensitive.
-Connection string contains such information. To see how a connection string looks like, you can open a SignalR service resource in Azure portal and go to "Keys" tab. You'll see two connection strings (primary and secondary) in the following format:
+For example, a typical connection string may look like this:
``` Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0; ```
-> [!NOTE]
-> Besides portal, you can also use Azure CLI to get the connection string:
->
-> ```bash
-> az signalr key list -g <resource_group> -n <resource_name>
-> ```
- You can see in the connection string, there are two main information:
-* `Endpoint=https://<resource_name>.service.signalr.net` is the endpoint URL of the resource
-* `AccessKey=<access_key>` is the key to authenticate with the service. When access key is specified in connection string, SignalR service SDK will use it to generate a token that can be validated by the service.
+- `Endpoint=https://<resource_name>.service.signalr.net` is the endpoint URL of the resource
+- `AccessKey=<access_key>` is the key to authenticate with the service. When an access key is specified in the connection string, the SignalR service SDK uses it to generate a token that can be validated by the service.
->[!NOTE]
-> For more information about how access tokens are generated and validated, see this [article](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md#authenticate-via-azure-signalr-service-accesskey).
+The following table lists all the valid names for key/value pairs in the connection string.
+
+| key | Description | Required | Default value | Example value |
+| -- | -- | -- | -- | -- |
+| Endpoint | The URI of your ASRS instance. | Y | N/A | https://foo.service.signalr.net |
+| Port | The port that your ASRS instance is listening on. | N | 80/443, depending on the endpoint URI scheme | 8080 |
+| Version | The version of given connection string. | N | 1.0 | 1.0 |
+| ClientEndpoint | The URI of your reverse proxy, like App Gateway or API Management | N | null | https://foo.bar |
+| AuthType | The authentication type. By default, AccessKey is used to authorize requests. **Case insensitive** | N | null | azure, azure.msi, azure.app |
+
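Because the format is just semicolon-separated `key=value` pairs, parsing one is straightforward. A minimal bash sketch (the key values below are made-up placeholders):

```bash
# Split a connection string into key/value pairs (placeholder values).
conn='Endpoint=https://foo.service.signalr.net;AccessKey=abc123;Version=1.0;'
IFS=';' read -ra pairs <<< "$conn"
for pair in "${pairs[@]}"; do
  [ -z "$pair" ] && continue   # ignore empty segments (e.g. after a trailing ;)
  key="${pair%%=*}"            # text before the first '='
  value="${pair#*=}"           # text after the first '='
  echo "$key -> $value"
done
```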
+### Use AccessKey
+
+The local authentication method (access key) is used when `AuthType` isn't set.
+
+| key | Description | Required | Default value | Example value |
+| -- | -- | -- | -- | -- |
+| AccessKey | The base64-encoded key string used to generate access tokens. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
+
+### Use Azure Active Directory
+
+The Azure AD authentication method is used when `AuthType` is set to `azure`, `azure.app`, or `azure.msi`.
+
+| key | Description | Required | Default value | Example value |
+| -- | -- | -- | -- | -- |
+| ClientId | A GUID that represents an Azure application or an Azure identity. | N | null | `00000000-0000-0000-0000-000000000000` |
+| TenantId | A GUID that represents an organization in Azure Active Directory. | N | null | `00000000-0000-0000-0000-000000000000` |
+| ClientSecret | The password of an Azure application instance. | N | null | `***********************.****************` |
+| ClientCertPath | The absolute path to a certificate file for an Azure application instance. | N | null | `/usr/local/cert/app.cert` |
+
+A different `TokenCredential` is used to generate Azure AD tokens depending on the parameters you provide.
+
+- `type=azure`
+
+ [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) will be used.
+
+ ```
+ Endpoint=xxx;AuthType=azure
+ ```
+
+- `type=azure.msi`
+
+ 1. User-assigned managed identity will be used if `clientId` has been given in connection string.
+
+ ```
+ Endpoint=xxx;AuthType=azure.msi;ClientId=00000000-0000-0000-0000-000000000000
+ ```
+
+ - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) will be used.
+
+ 2. Otherwise system-assigned managed identity will be used.
+
+ ```
+ Endpoint=xxx;AuthType=azure.msi;
+ ```
+
+ - [ManagedIdentityCredential()](/dotnet/api/azure.identity.managedidentitycredential) will be used.
+
+
+- `type=azure.app`
+
+ `clientId` and `tenantId` are required to use [Azure AD application with service principal](/azure/active-directory/develop/howto-create-service-principal-portal).
+
+ 1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) will be used if `clientSecret` is given.
+ ```
+ Endpoint=xxx;AuthType=azure.app;ClientId=00000000-0000-0000-0000-000000000000;TenantId=00000000-0000-0000-0000-000000000000;ClientSecret=******
+ ```
+
+ 2. [ClientCertificateCredential(clientId, tenantId, clientCertPath)](/dotnet/api/azure.identity.clientcertificatecredential) will be used if `clientCertPath` is given.
+ ```
+ Endpoint=xxx;AuthType=azure.app;ClientId=00000000-0000-0000-0000-000000000000;TenantId=00000000-0000-0000-0000-000000000000;ClientCertPath=/path/to/cert
+ ```
+
+## How to get my connection strings
-## Other authentication types
+### From Azure portal
-Besides access key, SignalR service also supports other types of authentication methods in connection string.
+Open your SignalR service resource in the Azure portal and go to the `Keys` tab.
-### Azure Active Directory Application
+You'll see two connection strings (primary and secondary) in the following format:
+
+> Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
+
+### From Azure CLI
+
+You can also use Azure CLI to get the connection string:
+
+```bash
+az signalr key list -g <resource_group> -n <resource_name>
+```
+
+### For using Azure AD application
You can use [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
-To use Azure AD authentication, you need to remove `AccessKey` from connection string and add `AuthType=aad`. You also need to specify the credentials of your Azure AD application, including client ID, client secret and tenant ID. The connection string will look as follows:
+To use Azure AD authentication, you need to remove `AccessKey` from connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Azure AD application, including client ID, client secret and tenant ID. The connection string will look as follows:
```
-Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0;
+Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.app;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0;
``` For more information about how to authenticate using Azure AD application, see this [article](signalr-howto-authorize-application.md).
-### Managed identity
+### For using Managed identity
You can also use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
-There are two types of managed identities, to use system assigned identity, you just need to add `AuthType=aad` to the connection string:
+There are two types of managed identities. To use a system-assigned identity, you just need to add `AuthType=azure.msi` to the connection string:
```
-Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;Version=1.0;
+Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;Version=1.0;
``` SignalR service SDK will automatically use the identity of your app server.
SignalR service SDK will automatically use the identity of your app server.
To use user assigned identity, you also need to specify the client ID of the managed identity: ```
-Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;ClientId=<client_id>;Version=1.0;
+Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;ClientId=<client_id>;Version=1.0;
``` For more information about how to configure managed identity, see this [article](signalr-howto-authorize-managed-identity.md).
For more information about how to configure managed identity, see this [article]
> [!NOTE] > It's highly recommended to use Azure AD to authenticate with SignalR service, as it's more secure compared to using access keys. If you don't use access key authentication at all, consider disabling it completely (go to Azure portal -> Keys -> Access Key -> Disable). If you still use access keys, it's highly recommended to rotate them regularly (more information can be found [here](signalr-howto-key-rotation.md)).
+### Use connection string generator
+
+It may be cumbersome and error-prone to build connection strings manually.
+
+To avoid making mistakes, we built a tool to help you generate connection strings with Azure AD identities like `clientId`, `tenantId`, etc.
+
+To use connection string generator, open your SignalR resource in Azure portal, go to `Connection strings` tab:
++
+On this page, you can choose different authentication types (access key, managed identity, or Azure AD application) and input information like client endpoint, client ID, client secret, etc. Then the connection string is generated automatically. You can copy it and use it in your application.
+
+> [!NOTE]
+> Nothing you enter on this page is saved after you leave it (the information stays on the client side), so copy the generated connection string and save it in a secure place for your application to use.
+
+> [!NOTE]
+> For more information about how access tokens are generated and validated, see this [article](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md#authenticate-via-azure-signalr-service-accesskey).
+ ## Client and server endpoints Connection string contains the HTTP endpoint for app server to connect to SignalR service. This is also the endpoint server will return to clients in negotiate response, so client can also connect to the service.
-But in some applications there may be an additional component in front of SignalR service and all client connections need to go through that component first (to gain additional benefits like network security, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides such functionality).
+But in some applications there may be an extra component in front of SignalR service and all client connections need to go through that component first (to gain extra benefits like network security, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides such functionality).
In such a case, the client needs to connect to an endpoint different from the SignalR service. Instead of manually replacing the endpoint on the client side, you can add `ClientEndpoint` to the connection string:
Similarly, when server wants to make [server connections](signalr-concept-intern
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ServerEndpoint=https://<url_to_app_gateway>;Version=1.0; ```
-## Use connection string generator
-
-It may be cumbersome and error-prone to compose connection string manually. In Azure portal, there is a tool to help you generate connection string with additional information like client endpoint and auth type.
-
-To use connection string generator, open the SignalR resource in Azure portal, go to "Connection strings" tab:
--
-In this page you can choose different authentication types (access key, managed identity or Azure AD application) and input information like client endpoint, client ID, client secret, etc. Then connection string will be automatically generated. You can copy and use it in your application.
-
-> [!NOTE]
-> Everything you input in this page won't be saved after you leave the page (since they're only client side information), so please copy and save it in a secure place for your application to use.
- ## Configure connection string in your application There are two ways to configure connection string in your application.
services.AddSignalR().AddAzureSignalR("<connection_string>");
Or you can call `AddAzureSignalR()` without any arguments, then service SDK will read the connection string from a config named `Azure:SignalR:ConnectionString` in your [config providers](/dotnet/core/extensions/configuration-providers).
-In a local development environment, the config is usually stored in file (appsettings.json or secrets.json) or environment variables, so you can use one of the following ways to configure connection string:
+In a local development environment, the config is stored in a file (appsettings.json or secrets.json) or in environment variables, so you can use one of the following ways to configure the connection string:
-* Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)
-* Set connection string to environment variable named `Azure__SignalR__ConnectionString` (colon needs to replaced with double underscore in [environment variable config provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider)).
+- Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)
+- Set the connection string in an environment variable named `Azure__SignalR__ConnectionString` (colons need to be replaced with double underscores in the [environment variable config provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider)).
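As a concrete sketch of the environment-variable option, the config key `Azure:SignalR:ConnectionString` maps to the variable name shown below in bash (the connection string value is a placeholder):

```bash
# The config key becomes an environment variable name by replacing ':' with '__'.
config_key='Azure:SignalR:ConnectionString'
env_name="${config_key//:/__}"   # Azure__SignalR__ConnectionString
export "$env_name"='Endpoint=https://foo.service.signalr.net;AccessKey=placeholder;Version=1.0;'
echo "$env_name"
```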
In production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up config provider for those services.
In production environment, you can use other Azure services to manage config/sec
### Configure multiple connection strings
-Azure SignalR Service also allows server to connect to multiple service endpoints at the same time, so it can handle more connections which are beyond one service instance's limit. Also if one service instance is down, other service instances can be used as backup. For more information about how to use multiple instances, see this [article](signalr-howto-scale-multi-instances.md).
+Azure SignalR Service also allows the server to connect to multiple service endpoints at the same time, so it can handle more connections than one service instance's limit allows. Also, if one service instance is down, the other service instances can be used as backup. For more information about how to use multiple instances, see this [article](signalr-howto-scale-multi-instances.md).
There are also two ways to configure multiple instances:
-* Through code
+- Through code
- ```cs
- services.AddSignalR().AddAzureSignalR(options =>
- {
- options.Endpoints = new ServiceEndpoint[]
- {
- new ServiceEndpoint("<connection_string_1>", name: "name_a"),
- new ServiceEndpoint("<connection_string_2>", name: "name_b", type: EndpointType.Primary),
- new ServiceEndpoint("<connection_string_3>", name: "name_c", type: EndpointType.Secondary),
- };
- });
- ```
+ ```cs
+ services.AddSignalR().AddAzureSignalR(options =>
+ {
+ options.Endpoints = new ServiceEndpoint[]
+ {
+ new ServiceEndpoint("<connection_string_1>", name: "name_a"),
+ new ServiceEndpoint("<connection_string_2>", name: "name_b", type: EndpointType.Primary),
+ new ServiceEndpoint("<connection_string_3>", name: "name_c", type: EndpointType.Secondary),
+ };
+ });
+ ```
- You can assign a name and type to each service endpoint so you can distinguish them later.
+ You can assign a name and type to each service endpoint so you can distinguish them later.
-* Through config
+- Through config
- You can use any supported config provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example:
+ You can use any supported config provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example:
- ```bash
- dotnet user-secrets set Azure:SignalR:ConnectionString:name_a <connection_string_1>
- dotnet user-secrets set Azure:SignalR:ConnectionString:name_b:primary <connection_string_2>
- dotnet user-secrets set Azure:SignalR:ConnectionString:name_c:secondary <connection_string_3>
- ```
+ ```bash
+ dotnet user-secrets set Azure:SignalR:ConnectionString:name_a <connection_string_1>
+ dotnet user-secrets set Azure:SignalR:ConnectionString:name_b:primary <connection_string_2>
+ dotnet user-secrets set Azure:SignalR:ConnectionString:name_c:secondary <connection_string_3>
+ ```
- You can also assign name and type to each endpoint, by using a different config name in the following format:
+ You can also assign name and type to each endpoint, by using a different config name in the following format:
- ```
- Azure:SignalR:ConnectionString:<name>:<type>
- ```
+ ```
+ Azure:SignalR:ConnectionString:<name>:<type>
+ ```
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
+
+ Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)
+description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts.
+ Last updated : 05/10/2022+++
+# Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)
+
+[Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-introduction) is an enterprise-class, high-performance, metered file storage service. The service supports the most demanding enterprise file workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. For more information on Azure NetApp Files, see the [Azure NetApp Files](https://docs.microsoft.com/azure/azure-netapp-files/) documentation.
+
+[Azure VMware Solution](/azure/azure-vmware/introduction) supports attaching Network File System (NFS) datastores as a persistent storage option. You can create NFS datastores with Azure NetApp Files volumes and attach them to clusters of your choice. You can also create virtual machines (VMs) for optimal cost and performance.
+
+> [!IMPORTANT]
+> Azure NetApp Files datastores for Azure VMware Solution hosts is currently in public preview. This version is provided without a service-level agreement and is not recommended for production workloads. Some features may not be supported or may have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+By using NFS datastores backed by Azure NetApp Files, you can expand your storage instead of scaling the clusters. You can also use Azure NetApp Files volumes to replicate data from on-premises or primary VMware environments for the secondary site.
+
+Create your Azure VMware Solution private cloud and create Azure NetApp Files NFS volumes in the virtual network connected to it over ExpressRoute. Ensure there's connectivity from the private cloud to the NFS volumes created. Use those volumes to create NFS datastores and attach the datastores to clusters of your choice in a private cloud. As a native integration, no other permissions configured via vSphere are needed.
+
+The following diagram demonstrates a typical architecture of Azure NetApp Files backed NFS datastores attached to an Azure VMware Solution private cloud via ExpressRoute.
++
+## Prerequisites
+
+Before you begin the prerequisites, review the [Performance best practices](#performance-best-practices) section to learn about optimal performance of NFS datastores on Azure NetApp Files volumes.
+
+1. [Deploy Azure VMware Solution](https://docs.microsoft.com/azure/azure-vmware/deploy-azure-vmware-solution) private cloud in a configured virtual network. For more information, see [Network planning checklist](/azure/azure-vmware/tutorial-network-checklist) and [Configure networking for your VMware private cloud](/azure/azure-vmware/tutorial-configure-networking).
+1. Create an [NFSv3 volume for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-create-volumes) in the same virtual network as the Azure VMware Solution private cloud.
+ 1. Verify connectivity from the private cloud to Azure NetApp Files volume by pinging the attached target IP.
+ 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
+
+ `az feature register --name "ANFAvsDataStore" --namespace "Microsoft.NetApp"`
+
+ `az feature show --name "ANFAvsDataStore" --namespace "Microsoft.NetApp" --query properties.state`
+    1. Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For optimal performance, it's recommended to use the Ultra tier.
+ 1. Create a volume with **Standard** [network features](/azure/azure-netapp-files/configure-network-features) if available for ExpressRoute FastPath connectivity.
+ 1. Under the **Protocol** section, select **Azure VMware Solution Datastore** to indicate the volume is created to use as a datastore for Azure VMware Solution private cloud.
+ 1. If you're using [export policies](/azure/azure-netapp-files/azure-netapp-files-configure-export-policy) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced so if the IP isn't enabled, connectivity to datastore will be impacted.
+
+## Supported regions
+
+Azure NetApp Files datastores for Azure VMware Solution are currently supported in the following regions: East US, Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, France Central, Germany West Central, Japan West, North Central US, North Europe, Southeast Asia, Switzerland West, UK South, UK West, US South Central, and West US. The list of supported regions will expand as the preview progresses.
+
+## Performance best practices
+
+There are some important best practices to follow for optimal performance of NFS datastores on Azure NetApp Files volumes.
+
+- Create Azure NetApp Files volumes using **Standard** network features to enable optimized connectivity from Azure VMware Solution private cloud via ExpressRoute FastPath connectivity.
+- For optimized performance, choose **UltraPerformance** gateway and enable [ExpressRoute FastPath](/azure/expressroute/expressroute-howto-linkvnet-arm#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](/azure/expressroute/expressroute-about-virtual-network-gateways).
+- Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For best performance, it's recommended to use the Ultra tier.
+- Create multiple datastores of 4-TB size for better performance. The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within same [Availability Zone](https://docs.microsoft.com/azure/availability-zones/az-overview#availability-zones).
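To make the sizing guidance above concrete, here's an illustrative calculation of how many 4-TB datastores a given capacity need implies, and whether it crosses the default limit of 8. Function names and structure are for this sketch only, not part of any tool:

```python
import math

DATASTORE_SIZE_TB = 4        # recommended datastore size from the guidance above
DEFAULT_LIMIT = 8            # default datastore limit
MAX_LIMIT = 256              # maximum after a support-ticket increase

def plan_datastores(required_tb: float):
    """Return (datastore_count, needs_limit_increase) for a capacity need,
    following the 4-TB-per-datastore best practice."""
    count = math.ceil(required_tb / DATASTORE_SIZE_TB)
    if count > MAX_LIMIT:
        raise ValueError("exceeds the documented maximum of 256 datastores")
    return count, count > DEFAULT_LIMIT

print(plan_datastores(20))   # (5, False) -> within the default limit
print(plan_datastores(100))  # (25, True) -> request a limit increase
```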
+
+## Attach an Azure NetApp Files volume to your private cloud
+
+### [Portal](#tab/azure-portal)
+
+To attach an Azure NetApp Files volume to your private cloud using the Azure portal, follow these steps:
+
+1. Sign in to the Azure portal.
+1. Select **Subscriptions** to see a list of subscriptions.
+1. From the list, select the subscription you want to use.
+1. Under Settings, select **Resource providers**.
+1. Search for **Microsoft.AVS** and select it.
+1. Select **Register**.
+1. Under **Settings**, select **Preview features**.
+    1. Verify you're registered for both the `CloudSanExperience` and `AnfDatastoreExperience` features.
+1. Navigate to your Azure VMware Solution. Under **Manage**, select **Storage (preview)**.
+1. Select **Connect Azure NetApp Files volume**.
+1. In **Connect Azure NetApp Files volume**, select the **Subscription**, **NetApp account**, **Capacity pool**, and **Volume** to be attached as a datastore.
+
+ :::image type="content" source="media/attach-netapp-files-to-cloud/connect-netapp-files-portal-experience-1.png" alt-text="Image shows the navigation to Connect Azure NetApp Files volume pop-up window." lightbox="media/attach-netapp-files-to-cloud/connect-netapp-files-portal-experience-1.png":::
+
+1. Verify the protocol is NFS. You'll need to verify the virtual network and subnet to ensure connectivity to the Azure VMware Solution private cloud.
+1. Under **Associated cluster**, select the **Client cluster** to associate the NFS volume as a datastore.
+1. Under **Data store**, create a personalized name for your **Datastore name**.
+    1. When the datastore is created, you should see all of your datastores in **Storage (preview)**.
+ 2. You'll also notice that the NFS datastores are added in vCenter.
++
+### [Azure CLI](#tab/azure-cli)
+
+To attach an Azure NetApp Files volume to your private cloud using Azure CLI, follow these steps:
+
+1. Verify the subscription is registered for the `CloudSanExperience` feature in the **Microsoft.AVS** namespace. If it's not already registered, then register it.
+
+ `az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS"`
+
+ `az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"`
+1. The registration should take approximately 15 minutes to complete. You can also check the status.
+
+ `az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS" --query properties.state`
+1. If the registration is stuck in an intermediate state for longer than 15 minutes, unregister, then re-register the flag.
+
+ `az feature unregister --name "CloudSanExperience" --namespace "Microsoft.AVS"`
+
+ `az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"`
+1. Verify the subscription is registered for the `AnfDatastoreExperience` feature in the **Microsoft.AVS** namespace. If it's not already registered, then register it.
+
+ `az feature register --name " AnfDatastoreExperience" --namespace "Microsoft.AVS"`
+
+ `az feature show --name "AnfDatastoreExperience" --namespace "Microsoft.AVS" --query properties.state`
+1. Verify the VMware extension is installed. If the extension is already installed, verify you're using the latest version of the Azure CLI extension. If an older version is installed, update the extension.
+
+ `az extension show --name vmware`
+
+ `az extension list-versions -n vmware`
+
+ `az extension update --name vmware`
+1. If the VMware extension isn't already installed, install it.
+
+ `az extension add --name vmware`
+1. Create a datastore using an existing ANF volume in an Azure VMware Solution private cloud cluster.
+
+    `az vmware datastore netapp-volume create --name MyDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud --volume-id /subscriptions/<Subscription Id>/resourceGroups/<Resourcegroup name>/providers/Microsoft.NetApp/netAppAccounts/<Account name>/capacityPools/<pool name>/volumes/<Volume name>`
+1. If needed, you can display the help on the datastores.
+
+ `az vmware datastore -h`
+1. Show the details of an ANF-based datastore in a private cloud cluster.
+
+ `az vmware datastore show --name ANFDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud`
+1. List all of the datastores in a private cloud cluster.
+
+ `az vmware datastore list --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud`
+++
+## Disconnect an Azure NetApp Files-based datastore from your private cloud
+
+You can use the instructions provided to disconnect an Azure NetApp Files-based (ANF) datastore using either the Azure portal or the Azure CLI. There's no maintenance window required for this operation. The disconnect action only disconnects the ANF volume as a datastore; it doesn't delete the data or the ANF volume.
+
+**Disconnect an ANF datastore using the Azure portal**
+
+1. Select the datastore you want to disconnect from.
+1. Right-click on the datastore and select **disconnect**.
+
+**Disconnect an ANF datastore using Azure CLI**
+
+ `az vmware datastore delete --name ANFDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud`
+
+## Next steps
+
+Now that you've attached a datastore on Azure NetApp Files-based NFS volume to your Azure VMware Solution hosts, you can create your VMs. Use the following resources to learn more.
+
+- [Service levels for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-service-levels)
+- Datastore protection using [Azure NetApp Files snapshots](/azure/azure-netapp-files/snapshots-introduction)
+- [About ExpressRoute virtual network gateways](https://docs.microsoft.com/azure/expressroute/expressroute-about-virtual-network-gateways)
+- [Understand Azure NetApp Files backup](/azure/azure-netapp-files/backup-introduction)
+- [Guidelines for Azure NetApp Files network planning](https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-network-topologies)
+
+## FAQs
+
+- **Are there any special permissions required to create the datastore with the Azure NetApp Files volume and attach it onto the clusters in a private cloud?**
+
+    No other special permissions are needed. The datastore creation and attachment are implemented via the Azure VMware Solution control plane.
+
+- **Which NFS versions are supported?**
+
+ NFSv3 is supported for datastores on Azure NetApp Files.
+
+- **Should Azure NetApp Files be in the same subscription as the private cloud?**
+
+ It's recommended to create the Azure NetApp Files volumes for the datastores in the same VNet that has connectivity to the private cloud.
+
+- **How many datastores are we supporting with Azure VMware Solution?**
+
+ The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+- **What latencies and bandwidth can be expected from the datastores backed by Azure NetApp Files?**
+
+ We're currently validating and working on benchmarking. For now, follow the [Performance best practices](#performance-best-practices) outlined in this article.
+
+- **What are my options for backup and recovery?**
+
+    Azure NetApp Files (ANF) supports [snapshots](/azure/azure-netapp-files/azure-netapp-files-manage-snapshots) of datastores for quick checkpoints for near-term recovery or quick clones. ANF backup lets you offload your ANF snapshots to Azure storage. This feature is available in public preview. Only blocks that changed relative to previously offloaded snapshots are copied and stored, in an efficient format. This ability decreases Recovery Point Objective (RPO) and Recovery Time Objective (RTO) while lowering the backup data transfer burden on the Azure VMware Solution service.
+
+- **How do I monitor Storage Usage?**
+
+ Use [Metrics for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-metrics) to monitor storage and performance usage for the Datastore volume and to set alerts.
+
+- **What metrics are available for monitoring?**
+
+ Usage and performance metrics are available for monitoring the Datastore volume. Replication metrics are also available for ANF datastore that can be replicated to another region using Cross Regional Replication. For more information about metrics, see [Metrics for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-metrics).
+
+- **What happens if a new node is added to the cluster, or an existing node is removed from the cluster?**
+
+ When you add a new node to the cluster, it will automatically gain access to the datastore. Removing an existing node from the cluster won't affect the datastore.
+
+- **How are the datastores charged, is there an additional charge?**
+
+ Azure NetApp Files NFS volumes that are used as datastores will be billed following the [capacity pool based billing model](/azure/azure-netapp-files/azure-netapp-files-cost-model). Billing will depend on the service level. There's no extra charge for using Azure NetApp Files NFS volumes as datastores.
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
Title: Concepts - Identity and access description: Learn about the identity and access concepts of Azure VMware Solution Previously updated : 07/29/2021 Last updated : 06/06/2022 # Azure VMware Solution identity concepts
-Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You'll use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. The CloudAdmin role is used for vCenter Server and restricted administrator rights for NSX-T Manager.
+Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You'll use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. The CloudAdmin role is used for vCenter Server and the administrator role (with restricted permissions) is used for NSX-T Manager.
## vCenter Server access and identity [!INCLUDE [vcenter-access-identity-description](includes/vcenter-access-identity-description.md)] > [!IMPORTANT]
-> Azure VMware Solution offers custom roles on vCenter Server but currently doesn't offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter Server](#create-custom-roles-on-vcenter-server) section later in this article.
+> Azure VMware Solution offers custom roles on vCenter Server but currently doesn't offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter Server](#create-custom-roles-on-vcenter-server) section later in this article.
### View the vCenter privileges You can view the privileges granted to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter.
-1. Sign in to the vSphere Client and go to **Menu** > **Administration**.
-
+1. Sign in to the vSphere Client and go to **Menu** > **Administration**.
1. Under **Access Control**, select **Roles**.-
-1. From the list of roles, select **CloudAdmin** and then select **Privileges**.
+1. From the list of roles, select **CloudAdmin** and then select **Privileges**.
:::image type="content" source="media/concepts/role-based-access-control-cloudadmin-privileges.png" alt-text="Screenshot showing the roles and privileges for CloudAdmin in the vSphere Client.":::
The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
### Create custom roles on vCenter Server
-Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role.
+Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role.
-You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges lesser than or equal to their current role. You can create roles with privileges greater than CloudAdmin, but you can't assign the role to any users or groups or delete the role.
+You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges lesser than or equal to their current role. You can create roles with privileges greater than CloudAdmin; however, you can't assign such a role to any users or groups, and you can't delete it.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmin role as the basis for creating new custom roles.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
1. Select the **CloudAdmin** role and select the **Clone role action** icon.
- >[!NOTE]
+ >[!NOTE]
>Don't clone the **Administrator** role because you can't use it. Also, the custom role created can't be deleted by cloudadmin\@vsphere.local. 1. Provide the name you want for the cloned role. 1. Add or remove privileges for the role and select **OK**. The cloned role is visible in the **Roles** list. - #### Apply a custom role 1. Navigate to the object that requires the added permission. For example, to apply permission to a folder, navigate to **Menu** > **VMs and Templates** > **Folder Name**.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
## NSX-T Manager access and identity
->[!NOTE]
->NSX-T [!INCLUDE [nsxt-version](includes/nsxt-version.md)] is currently supported for all new private clouds.
+When a private cloud is provisioned using Azure portal, Software Defined Data Center (SDDC) management components like vCenter and NSX-T Manager are provisioned for customers.
+
+Microsoft is responsible for the lifecycle management of NSX-T appliances like NSX-T Managers and NSX-T Edges. They're responsible for bootstrapping network configuration, like creating the Tier-0 gateway.
+
+You're responsible for NSX-T software-defined networking (SDN) configuration, for example:
+
+- Network segments
+- Other Tier-1 gateways
+- Distributed firewall rules
+- Stateful services like gateway firewall
+- Load balancer on Tier-1 gateways
+
+You can access NSX-T Manager using the built-in local user "admin" assigned the **Enterprise admin** role, which gives a user full privileges to manage NSX-T. While Microsoft manages the lifecycle of NSX-T, certain operations aren't allowed. Operations that aren't allowed include editing the configuration of host and edge transport nodes or starting an upgrade. Azure VMware Solution deploys new users with the specific set of permissions needed by each user. The purpose is to provide a clear separation of control between the Azure VMware Solution control plane configuration and the Azure VMware Solution private cloud user.
+
+For new private cloud deployments (in US West and Australia East) starting **June 2022**, NSX-T access will be provided with a built-in local user `cloudadmin` with a specific set of permissions to use only NSX-T functionality for workloads. The new **cloudadmin** user role will be rolled out in other regions in phases.
+
+> [!NOTE]
+> Admin access to NSX-T will not be provided to users for private cloud deployments created after **June 2022**.
+
+### NSX-T cloud admin user permissions
+
+The following permissions are assigned to the **cloudadmin** user in Azure VMware Solution NSX-T.
+
+| Category | Type | Operation | Permission |
+|--|--|-||
+| Networking | Connectivity | Tier-0 Gateways<br>Tier-1 Gateways<br>Segments | Read-only<br>Full Access<br>Full Access |
+| Networking | Network Services | VPN<br>NAT<br>Load Balancing<br>Forwarding Policy<br>Statistics | Full Access<br>Full Access<br>Full Access<br>Read-only<br>Full Access |
+| Networking | IP Management | DNS<br>DHCP<br>IP Address Pools | Full Access<br>Full Access<br>Full Access |
+| Networking | Profiles | | Full Access |
+| Security | East West Security | Distributed Firewall<br>Distributed IDS and IPS<br>Identity Firewall | Full Access<br>Full Access<br>Full Access |
+| Security | North South Security | Gateway Firewall<br>URL Analysis | Full Access<br>Full Access |
+| Security | Network Introspection | | Read-only |
+| Security | Endpoint Protection | | Read-only |
+| Security | Settings | | Full Access |
+| Inventory | | | Full Access |
+| Troubleshooting | IPFIX | | Full Access |
+| Troubleshooting | Port Mirroring | | Full Access |
+| Troubleshooting | Traceflow | | Full Access |
+| System | Configuration<br>Settings<br>Settings<br>Settings | Identity firewall<br>Users and Roles<br>Certificate Management<br>User Interface Settings | Full Access<br>Full Access<br>Full Access<br>Full Access |
+| System | All other | | Read-only |
++
+You can view the permissions granted to the Azure VMware Solution CloudAdmin role using the following steps:
+
+1. Log in to the NSX-T Manager.
+1. Navigate to **Systems** > **Users and Roles** and locate **User Role Assignment**.
+1. The **Roles** column for the CloudAdmin user provides information on the NSX role-based access control (RBAC) roles assigned.
+1. Select the **Roles** tab to view specific permissions associated with each of the NSX RBAC roles.
+1. To view **Permissions**, expand the **CloudAdmin** role and select a category, like Networking or Security.
-Use the *admin* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) gateways, segments (logical switches), and all services. In addition, the privileges give you access to the NSX-T Tier-0 (T0) gateway. A change to the T0 gateway could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 gateway.
+> [!NOTE]
+> The current Azure VMware Solution with **NSX-T admin user** will eventually switch from **admin** user to **cloudadmin** user. You'll receive a notification through Azure Service Health that includes the timeline of this change so you can change the NSX-T credentials you've used for the other integration.
-
## Next steps Now that you've covered Azure VMware Solution access and identity concepts, you may want to learn about:
azure-web-pubsub Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-metrics.md
Metrics provide the running info of the service. The available metrics are:
|Connection Quota Utilization|Percent|Max / Avg|The percentage of connection connected relative to connection quota.|No Dimensions| |Inbound Traffic|Bytes|Sum|The inbound traffic of service|No Dimensions| |Outbound Traffic|Bytes|Sum|The outbound traffic of service|No Dimensions|
+|Server Load|Percent|Max / Avg|The percentage of server load|No Dimensions|
### Understand Dimensions
azure-web-pubsub Concept Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-performance.md
One of the key benefits of using Azure Web PubSub Service is the ease of scaling
In this guide, we'll introduce the factors that affect Web PubSub upstream application performance. We'll describe typical performance in different use-case scenarios.
+## Quick evaluation using metrics
+ Before going through the factors that impact performance, let's first introduce an easy way to monitor the pressure of your service. There's a metric called **Server Load** in the Azure portal.
+
+ <kbd>![Screenshot of the Server Load metric of Azure Web PubSub on Portal. The metrics shows Server Load is at about 8 percent usage. ](./media/concept-performance/server-load.png "Server Load")</kbd>
++
+ It shows the computing pressure of your Azure Web PubSub service. You can test your own scenario and check this metric to decide whether to scale up. The latency inside the Azure Web PubSub service remains low if the Server Load is below 70%.
+
+> [!NOTE]
+> If you are using unit 50 or unit 100 **and** your scenario is mainly sending to small groups (group size < 100), check [sending to small group](#small-group) for reference. In those scenarios there is a large routing cost that isn't included in the Server Load.
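The 70% guidance can be turned into a simple monitoring rule. This is only an illustrative sketch with hypothetical names, not an official alerting recipe:

```python
SERVER_LOAD_THRESHOLD = 70.0  # percent; the article's guidance for low latency

def should_scale_up(load_samples, threshold=SERVER_LOAD_THRESHOLD):
    """Recommend scaling when the recent maximum of the Server Load
    metric exceeds the documented 70% guidance."""
    return max(load_samples) > threshold

print(should_scale_up([40.0, 55.0, 68.0]))  # False: headroom remains
print(should_scale_up([40.0, 72.0, 65.0]))  # True: a sample crossed 70%
```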
+
+ Below are detailed concepts for evaluating performance.
## Term definitions *Inbound*: The incoming message to Azure Web PubSub Service.
The bandwidth limit is the same as that for **send to big group**.
## Next steps
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 05/12/2022 Last updated : 06/06/2022
Archive tier supports the following workloads:
| Azure Virtual Machines | Only monthly and yearly recovery points. Daily and weekly recovery points aren't supported. <br><br> Age >= 3 months in Vault-standard tier <br><br> Retention left >= 6 months. <br><br> No active daily and weekly dependencies. | | SQL Server in Azure Virtual Machines <br><br> SAP HANA in Azure Virtual Machines | Only full recovery points. Logs and differentials aren't supported. <br><br> Age >= 45 days in Vault-standard tier. <br><br> Retention left >= 6 months. <br><br> No dependencies. |
+A recovery point becomes archivable only if all the above conditions are met.
+ >[!Note] >- Archive tier support for Azure Virtual Machines, SQL Servers in Azure VMs and SAP HANA in Azure VM is now generally available in multiple regions. For the detailed list of supported regions, see the [support matrix](#support-matrix). >- Archive tier support for Azure Virtual Machines for the remaining regions is in limited public preview. To sign up for limited public preview, fill [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR463S33c54tEiJLEM6Enqb9UNU5CVTlLVFlGUkNXWVlMNlRPM1lJWUxLRy4u).
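The conditions in the table for Azure VM recovery points can be sketched as a check like the following. This is illustrative only: month boundaries are approximated as 30-day periods, and the function name is hypothetical:

```python
from datetime import date, timedelta

def vm_recovery_point_archivable(created: date, retention_ends: date,
                                 is_monthly_or_yearly: bool,
                                 has_active_daily_weekly_deps: bool,
                                 today: date) -> bool:
    """Approximate the Azure VM archivability rules: monthly/yearly point,
    >= 3 months in vault-standard tier, >= 6 months retention left,
    and no active daily/weekly dependencies."""
    age_ok = (today - created) >= timedelta(days=90)        # ~3 months
    retention_ok = (retention_ends - today) >= timedelta(days=180)  # ~6 months
    return (is_monthly_or_yearly and age_ok and retention_ok
            and not has_active_daily_weekly_deps)

print(vm_recovery_point_archivable(
    created=date(2022, 1, 1), retention_ends=date(2023, 6, 1),
    is_monthly_or_yearly=True, has_active_daily_weekly_deps=False,
    today=date(2022, 6, 6)))
# True
```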
If you delete recovery points that haven't stayed in archive for a minimum of 18
Stop protection and delete data deletes all recovery points. For recovery points in archive that haven't stayed for a duration of 180 days in archive tier, deletion of recovery points leads to early deletion cost.
-## Archive Tier pricing
+## Archive tier pricing
You can view the Archive tier pricing from our [pricing page](azure-backup-pricing.md).
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
Previously updated : 02/25/2022 Last updated : 06/05/2022 #Customer intent: As someone with a networking background, I want to connect to a virtual machine securely via RDP/SSH using a private IP address through my browser.
In this quickstart, you deploy Bastion from your virtual machine settings in the
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the portal, go to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.
-1. Select **Bastion** in the left menu. You can view some of the values that will be used when creating the bastion host for your virtual network. Select **Deploy Bastion**.
+1. Select **Bastion** in the left menu. You can view some of the values that will be used when creating the bastion host for your virtual network. Select **Create Azure Bastion using defaults**.
:::image type="content" source="./media/quickstart-host-portal/deploy-bastion.png" alt-text="Screenshot of Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy-bastion.png"::: 1. Bastion begins deploying. This can take around 10 minutes to complete.
cloud-services-extended-support Override Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/override-sku.md
Setting the **allowModelOverride** property to `true` here will update the cloud
"packageUrl": "[parameters('packageSasUri')]", "configurationUrl": "[parameters('configurationSasUri')]", "upgradeMode": "[parameters('upgradeMode')]",
- ΓÇ£allowModelOverrideΓÇ¥ : true,
+ "allowModelOverride": true,
"roleProfile": { "roles": [ {
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Use the following table to determine supported styles and roles for each neural
|zh-CN-XiaomoNeural|`affectionate`, `angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `envious`, `fearful`, `gentle`, `sad`, `serious`|Supported|Supported| |zh-CN-XiaoruiNeural|`angry`, `calm`, `fearful`, `sad`|Supported|| |zh-CN-XiaoshuangNeural|`chat`|Supported||
-|zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `disgruntled`, `fearful`, `gentle`, `lyrical`, `newscast`, `sad`, `serious`|Supported||
+|zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `disgruntled`, `fearful`, `gentle`, `lyrical`, `newscast`, `poetry-reading`, `sad`, `serious`|Supported||
|zh-CN-XiaoxuanNeural|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported|Supported| |zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `narration-relaxed`, `sad`, `serious`|Supported|Supported| |zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported||
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following table has descriptions of each supported style.
|`style="newscast"`|Expresses a formal and professional tone for narrating news.| |`style="newscast-casual"`|Expresses a versatile and casual tone for general news delivery.| |`style="newscast-formal"`|Expresses a formal, confident, and authoritative tone for news delivery.|
+|`style="poetry-reading"`|Expresses an emotional and rhythmic tone while reading a poem.|
|`style="sad"`|Expresses a sorrowful tone.| |`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.| |`style="shouting"`|Speaks as if from far away or outside, so as to be clearly heard.|
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
Azure Cognitive Service for Language provides the following features:
> | [Custom NER](custom-named-entity-recognition/overview.md) | Build an AI model to extract custom entity categories, using unstructured text that you provide. | * [Language Studio](custom-named-entity-recognition/quickstart.md?pivots=language-studio) <br> * [REST API](custom-named-entity-recognition/quickstart.md?pivots=rest-api) | > | [Analyze sentiment and opinions](sentiment-opinion-mining/overview.md) | This pre-configured feature provides sentiment labels (such as "*negative*", "*neutral*" and "*positive*") for sentences and documents. This feature can additionally provide granular information about the opinions related to words that appear in the text, such as the attributes of products or services. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](sentiment-opinion-mining/quickstart.md) <br> * [Docker container](sentiment-opinion-mining/how-to/use-containers.md) > |[Language detection](language-detection/overview.md) | This pre-configured feature evaluates text, and determines the language it was written in. It returns a language identifier and a score that indicates the strength of the analysis. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](language-detection/quickstart.md) <br> * [Docker container](language-detection/how-to/use-containers.md) |
-> |[Custom text classification (preview)](custom-classification/overview.md) | Build an AI model to classify unstructured text into custom classes that you define. | * [Language Studio](custom-classification/quickstart.md?pivots=language-studio)<br> * [REST API](language-detection/quickstart.md?pivots=rest-api) |
+> |[Custom text classification](custom-classification/overview.md) | Build an AI model to classify unstructured text into custom classes that you define. | * [Language Studio](custom-classification/quickstart.md?pivots=language-studio)<br> * [REST API](language-detection/quickstart.md?pivots=rest-api) |
> | [Document summarization (preview)](summarization/overview.md?tabs=document-summarization) | This pre-configured feature extracts key sentences that collectively convey the essence of a document. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](summarization/quickstart.md) | > | [Conversation summarization (preview)](summarization/overview.md?tabs=conversation-summarization) | This pre-configured feature summarizes issues and summaries in transcripts of customer-service conversations. | * [Language Studio](language-studio.md) <br> * [REST API](summarization/quickstart.md?tabs=rest-api) |
-> | [Conversational language understanding (preview)](conversational-language-understanding/overview.md) | Build an AI model to bring the ability to understand natural language into apps, bots, and IoT devices. | * [Language Studio](conversational-language-understanding/quickstart.md)
+> | [Conversational language understanding](conversational-language-understanding/overview.md) | Build an AI model to bring the ability to understand natural language into apps, bots, and IoT devices. | * [Language Studio](conversational-language-understanding/quickstart.md)
> | [Question answering](question-answering/overview.md) | This pre-configured feature provides answers to questions extracted from text input, using semi-structured content such as: FAQs, manuals, and documents. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](question-answering/quickstart/sdk.md) | > | [Orchestration workflow](orchestration-workflow/overview.md) | Train language models to connect your applications to question answering, conversational language understanding, and LUIS | * [Language Studio](orchestration-workflow/quickstart.md?pivots=language-studio) <br> * [REST API](orchestration-workflow/quickstart.md?pivots=rest-api) |
cognitive-services Responsible Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-use-cases.md
For more information, see:
|**Rewards**| A measure of how the user responded to the Rank API's returned reward action ID, as a score between 0 to 1. The 0 to 1 value is set by your business logic, based on how the choice helped achieve your business goals of personalization. The learning loop doesn't store this reward as individual user history. | |**Exploration**| The Personalizer service is exploring when, instead of returning the best action, it chooses a different action for the user. The Personalizer service avoids drift, stagnation, and can adapt to ongoing user behavior by exploring. |
-For more information, and additional key terms, please refer to the [Personalizer Terminology](/terminology.md) and [conceptual documentation](how-personalizer-works.md).
+For more information, and additional key terms, please refer to the [Personalizer Terminology](terminology.md) and [conceptual documentation](how-personalizer-works.md).
## Example use cases
communication-services Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/government.md
+
+ Title: Azure Communication Services in Azure Government
+description: Learn about using Azure Communication Services in US Government regions
++++ Last updated : 06/02/2022++++++++
+# Azure Communication Services for US Government
++
+Azure Communication Services can be used within [Azure Government](https://azure.microsoft.com/global-infrastructure/government/) to provide compliance with US government requirements for cloud services. In addition to enjoying the features and capabilities of Messaging, Voice and Video calling, developers benefit from the following features that are unique to Azure Government:
+- Your personal data is logically segregated from customer content in the commercial Azure cloud.
+- Your resource's customer content is stored within the United States.
+- Access to your organization's customer content is restricted to screened Microsoft personnel.
+
+You can find more information about the Office 365 Government – GCC High offering for US Government customers at [Office 365 Government plans](https://products.office.com/government/compare-office-365-government-plans). Please see [eligibility requirements](https://azure.microsoft.com/global-infrastructure/government/how-to-buy/) for Azure Government.
communication-services Browser Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/browser-support.md
+
+ Title: How to verify if your application is running in a web browser supported by Azure Communication Services
+description: Learn how to get current browser environment details using the Azure Communication Services Calling SDK for JavaScript
++ Last updated : 05/27/2022++++
+# How to verify if your application is running in a web browser supported by Azure Communication Services
+
+There are many different browsers available in the market today, but not all of them can properly support audio and video calling. To determine whether the browser your application is running on is supported, you can use the `getEnvironmentInfo` method to check for browser support.
+
+A `CallClient` instance is required for this operation. When you have a `CallClient` instance, you can use the `getEnvironmentInfo` method on the `CallClient` instance to obtain details about the current environment of your app:
++
+```javascript
+const callClient = new CallClient(options);
+const environmentInfo = await callClient.getEnvironmentInfo();
+```
+
+The `getEnvironmentInfo` method asynchronously returns an object of type `EnvironmentInfo`.
+
+- The `EnvironmentInfo` type is defined as:
+
+```javascript
+{
+ environment: Environment;
+ isSupportedPlatform: boolean;
+ isSupportedBrowser: boolean;
+ isSupportedBrowserVersion: boolean;
+ isSupportedEnvironment: boolean;
+}
+```
+- The `Environment` type within the `EnvironmentInfo` type is defined as:
+
+```javascript
+{
+ platform: string;
+ browser: string;
+ browserVersion: string;
+}
+```
+
+A supported environment is a combination of an operating system, a browser, and the minimum version required for that browser.
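The fields above can be combined into a simple support check before starting a call. The following is a minimal sketch; the helper names (`isCallingSupported`, `describeEnvironment`) are illustrative, not part of the SDK — they operate on an `EnvironmentInfo`-shaped object like the one `getEnvironmentInfo` resolves to:

```javascript
// Sketch: helpers operating on an EnvironmentInfo-shaped object
// (the same shape getEnvironmentInfo resolves to). These helper
// names are illustrative, not part of the SDK.
function isCallingSupported(environmentInfo) {
  // isSupportedEnvironment is true only when the platform, browser,
  // and browser version are all supported in combination.
  return environmentInfo.isSupportedEnvironment;
}

function describeEnvironment(environmentInfo) {
  // Produce a short human-readable summary, e.g. for a warning banner.
  const { platform, browser, browserVersion } = environmentInfo.environment;
  return `${platform} / ${browser} ${browserVersion}`;
}
```

An application could call `isCallingSupported` on the result of `getEnvironmentInfo` and, when it returns `false`, show the `describeEnvironment` summary in a message asking the user to switch browsers.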
container-registry Tasks Agent Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-agent-pools.md
Create an agent pool by using the [az acr agentpool create][az-acr-agentpool-cre
```azurecli az acr agentpool create \
+ --registry MyRegistry \
--name myagentpool \ --tier S2 ```
Scale the pool size up or down with the [az acr agentpool update][az-acr-agentpo
```azurecli az acr agentpool update \
+ --registry MyRegistry \
--name myagentpool \ --count 2 ```
subnetId=$(az network vnet subnet show \
--query id --output tsv) az acr agentpool create \
+ --registry MyRegistry \
--name myagentpool \ --tier S2 \ --subnet-id $subnetId
Queue a quick task on the agent pool by using the [az acr build][az-acr-build] c
```azurecli az acr build \
+ --registry MyRegistry \
--agent-pool myagentpool \ --image myimage:mytag \ --file Dockerfile \
For example, create a scheduled task on the agent pool with [az acr task create]
```azurecli az acr task create \
+ --registry MyRegistry \
--name mytask \ --agent-pool myagentpool \ --image myimage:mytag \
To verify task setup, run [az acr task run][az-acr-task-run]:
```azurecli az acr task run \
+ --registry MyRegistry \
--name mytask ```
To find the number of runs currently scheduled on the agent pool, run [az acr ag
```azurecli az acr agentpool show \
+ --registry MyRegistry \
--name myagentpool \ --queue-count ```
cosmos-db Account Databases Containers Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/account-databases-containers-items.md
Title: Azure Cosmos DB resource model description: This article describes Azure Cosmos DB resource model which includes the Azure Cosmos account, database, container, and the items. It also covers the hierarchy of these elements in an Azure Cosmos account. --+++ Last updated 07/12/2021-- # Azure Cosmos DB resource model
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Last updated 03/24/2022 -+ # What is Azure Cosmos DB analytical store?
cosmos-db Attachments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/attachments.md
Last updated 08/07/2020-+ # Azure Cosmos DB Attachments
cosmos-db Audit Restore Continuous https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-restore-continuous.md
Last updated 04/18/2022 -+ # Audit the point in time restore action for continuous backup mode in Azure Cosmos DB
cosmos-db Automated Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/automated-recommendations.md
Last updated 08/26/2021-+
cosmos-db Bulk Executor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/bulk-executor-overview.md
Last updated 05/28/2019 -+ # Azure Cosmos DB bulk executor library overview
cosmos-db Apache Cassandra Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/apache-cassandra-consistency-mapping.md
Last updated 03/24/2022-+ # Apache Cassandra and Azure Cosmos DB Cassandra API consistency levels
cosmos-db Cassandra Adoption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-adoption.md
Last updated 03/24/2022-+
cosmos-db Cassandra Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-introduction.md
Title: Introduction to the Azure Cosmos DB Cassandra API
description: Learn how you can use Azure Cosmos DB to "lift-and-shift" existing applications and build new applications by using the Cassandra drivers and CQL -+
cosmos-db Cassandra Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-support.md
Title: Apache Cassandra features supported by Azure Cosmos DB Cassandra API
description: Learn about the Apache Cassandra feature support in Azure Cosmos DB Cassandra API -+
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB Cassandra API description: Azure CLI Samples for Azure Cosmos DB Cassandra API-+ Last updated 02/21/2022-++
cosmos-db Connect Spark Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/connect-spark-configuration.md
Title: Working with Azure Cosmos DB Cassandra API from Spark
description: This article is the main page for Cosmos DB Cassandra API integration from Spark. -+
cosmos-db Create Account Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/create-account-java.md
Title: 'Tutorial: Build Java app to create Azure Cosmos DB Cassandra API account
description: This tutorial shows how to create a Cassandra API account, add a database (also called a keyspace), and add a table to that account by using a Java application. -+
cosmos-db Load Data Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/load-data-table.md
Last updated 05/20/2019 -+ ms.devlang: java #Customer intent: As a developer, I want to build a Java application to load data to a Cassandra API table in Azure Cosmos DB so that customers can store and manage the key/value data and utilize the global distribution, elastic scaling, multi-region , and other capabilities offered by Azure Cosmos DB.
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB Cassandra API with Bicep description: Use Bicep to create and configure Azure Cosmos DB Cassandra API.-+ Last updated 9/13/2021-++ # Manage Azure Cosmos DB Cassandra API resources using Bicep
cosmos-db Migrate Data Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-arcion.md
Last updated 04/04/2022-+ # Migrate data from Cassandra to Azure Cosmos DB Cassandra API account using Arcion
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-striim.md
Last updated 12/09/2021 -+ # Migrate data to Azure Cosmos DB Cassandra API account using Striim
cosmos-db Migrate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data.md
Title: 'Migrate your data to a Cassandra API account in Azure Cosmos DB- Tutoria
description: In this tutorial, learn how to copy data from Apache Cassandra to a Cassandra API account in Azure Cosmos DB. -+
cosmos-db Oracle Migrate Cosmos Db Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/oracle-migrate-cosmos-db-arcion.md
Last updated 04/04/2022-+ # Migrate data from Oracle to Azure Cosmos DB Cassandra API account using Arcion
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Cassandra API description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Cassandra API-+ Last updated 01/20/2021-++ # Azure PowerShell samples for Azure Cosmos DB Cassandra API
cosmos-db Query Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/query-data.md
description: This tutorial shows how to query user data from an Azure Cosmos DB
-+ Last updated 09/24/2018
cosmos-db Secondary Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/secondary-indexing.md
Last updated 09/03/2021 -+ # Secondary indexing in Azure Cosmos DB Cassandra API
cosmos-db Spark Aggregation Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-aggregation-operations.md
Title: Aggregate operations on Azure Cosmos DB Cassandra API tables from Spark
description: This article covers basic aggregation operations against Azure Cosmos DB Cassandra API tables from Spark -+
cosmos-db Spark Create Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-create-operations.md
Title: Create or insert data into Azure Cosmos DB Cassandra API from Spark
description: This article details how to insert sample data into Azure Cosmos DB Cassandra API tables -+
cosmos-db Spark Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-databricks.md
Title: Access Azure Cosmos DB Cassandra API from Azure Databricks
description: This article covers how to work with Azure Cosmos DB Cassandra API from Azure Databricks. -+
cosmos-db Spark Ddl Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-ddl-operations.md
Title: DDL operations in Azure Cosmos DB Cassandra API from Spark
description: This article details keyspace and table DDL operations against Azure Cosmos DB Cassandra API from Spark. -+
cosmos-db Spark Delete Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-delete-operation.md
Title: Delete operations on Azure Cosmos DB Cassandra API from Spark
description: This article details how to delete data in tables in Azure Cosmos DB Cassandra API from Spark -+
cosmos-db Spark Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-hdinsight.md
Title: Access Azure Cosmos DB Cassandra API on YARN with HDInsight
description: This article covers how to work with Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight. -+
cosmos-db Spark Read Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-read-operation.md
titleSuffix: Azure Cosmos DB
description: This article describes how to read data from Cassandra API tables in Azure Cosmos DB. -+
cosmos-db Spark Table Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-table-copy-operations.md
Title: Table copy operations on Azure Cosmos DB Cassandra API from Spark
description: This article details how to copy data between tables in Azure Cosmos DB Cassandra API -+
cosmos-db Spark Upsert Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-upsert-operations.md
Title: Upsert data into Azure Cosmos DB Cassandra API from Spark
description: This article details how to upsert into tables in Azure Cosmos DB Cassandra API from Spark -+
cosmos-db Templates Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/templates-samples.md
Title: Resource Manager templates for Azure Cosmos DB Cassandra API description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Cassandra API. -+ Last updated 10/14/2020-++ # Manage Azure Cosmos DB Cassandra API resources using Azure Resource Manager templates
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Title: Choose an API in Azure Cosmos DB description: Learn how to choose between SQL/Core, MongoDB, Cassandra, Gremlin, and table APIs in Azure Cosmos DB based on your workload requirements.--+++ Last updated 12/08/2021
cosmos-db Common Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/common-cli-samples.md
Last updated 02/22/2022--+++
cosmos-db Common Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/common-powershell-samples.md
description: Azure PowerShell Samples common to all Azure Cosmos DB APIs
Last updated 05/02/2022--+++
cosmos-db Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/compliance.md
Last updated 09/11/2021-+ # Compliance in Azure Cosmos DB
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Title: Azure Cosmos DB service quotas description: Azure Cosmos DB service quotas and default limits on different resource types.--+++ Last updated 05/30/2022
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-periodic-backup-restore.md
Last updated 12/09/2021 -+
cosmos-db Conflict Resolution Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/conflict-resolution-policies.md
Title: Conflict resolution types and resolution policies in Azure Cosmos DB description: This article describes the conflict categories and conflict resolution policies in Azure Cosmos DB.-+ Last updated 04/20/2020--++ # Conflict types and resolution policies when using multiple write regions
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Title: Consistency levels in Azure Cosmos DB description: Azure Cosmos DB has five consistency levels to help balance eventual consistency, availability, and latency trade-offs.--+++ Last updated 02/17/2022
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
Last updated 04/06/2022 -+
cosmos-db Continuous Backup Restore Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md
Last updated 02/28/2022 -+
cosmos-db Continuous Backup Restore Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-resource-model.md
Last updated 03/02/2022-+
The `RestoreParameters` resource contains the restore operation details includin
||| |restoreMode | The restore mode should be *PointInTime* | |restoreSource | The instanceId of the source account from which the restore will be initiated. |
-|restoreTimestampInUtc | Point in time in UTC to which the account should be restored to. |
+|restoreTimestampInUtc | Point in time in UTC to restore the account. |
|databasesToRestore | List of `DatabaseRestoreResource` objects to specify which databases and containers should be restored. Each resource represents a single database and all the collections under that database, see the [restorable SQL resources](#restorable-sql-resources) section for more details. If this value is empty, then the entire account is restored. | |gremlinDatabasesToRestore | List of `GremlinDatabaseRestoreResource` objects to specify which databases and graphs should be restored. Each resource represents a single database and all the graphs under that database. See the [restorable Gremlin resources](#restorable-graph-resources) section for more details. If this value is empty, then the entire account is restored. | |tablesToRestore | List of `TableRestoreResource` objects to specify which tables should be restored. Each resource represents a table under that database, see the [restorable Table resources](#restorable-table-resources) section for more details. If this value is empty, then the entire account is restored. |
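Putting the properties in the table above together, a `restoreParameters` block in a deployment might look like the following sketch; the instance ID path, timestamp, and database/collection names are placeholder values:

```json
"restoreParameters": {
  "restoreMode": "PointInTime",
  "restoreSource": "/subscriptions/<subscription-id>/providers/Microsoft.DocumentDB/locations/<location>/restorableDatabaseAccounts/<instanceId>",
  "restoreTimestampInUtc": "2022-06-01T00:00:00Z",
  "databasesToRestore": [
    {
      "databaseName": "mydb",
      "collectionNames": [ "mycollection" ]
    }
  ]
}
```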
Each resource contains information of a mutation event such as creation and dele
| eventTimestamp | The time in UTC when the database is created or deleted. | | ownerId | The name of the SQL database. | | ownerResourceId | The resource ID of the SQL database|
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li>Create: database creation event</li><li>Delete: database deletion event</li><li>Replace: database modification event</li><li>SystemOperation: database modification event triggered by the system. This event is not initiated by the user</li></ul> |
+| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li>Create: database creation event</li><li>Delete: database deletion event</li><li>Replace: database modification event</li><li>SystemOperation: database modification event triggered by the system. This event isn't initiated by the user</li></ul> |
| database |The properties of the SQL database at the time of the event| To get a list of all database mutations, see [Restorable Sql Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-databases/list) article.
Each resource contains information of a mutation event such as creation and dele
| eventTimestamp | The time in UTC when this container event happened.| | ownerId| The name of the SQL container.| | ownerResourceId | The resource ID of the SQL container.|
-| operationType | The operation type of this container event. Here are the possible values: <br/><ul><li>Create: container creation event</li><li>Delete: container deletion event</li><li>Replace: container modification event</li><li>SystemOperation: container modification event triggered by the system. This event is not initiated by the user</li></ul> |
+| operationType | The operation type of this container event. Here are the possible values: <br/><ul><li>Create: container creation event</li><li>Delete: container deletion event</li><li>Replace: container modification event</li><li>SystemOperation: container modification event triggered by the system. This event isn't initiated by the user</li></ul> |
| container | The properties of the SQL container at the time of the event.| To get a list of all container mutations under the same database, see [Restorable Sql Containers - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-containers/list) article.
Each resource contains information of a mutation event such as creation and dele
|eventTimestamp| The time in UTC when this database event happened.| | ownerId| The name of the MongoDB database. | | ownerResourceId | The resource ID of the MongoDB database. |
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event is not initiated by the user </li></ul> |
+| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user </li></ul> |
To get a list of all database mutation, see [Restorable Mongodb Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-databases/list) article.
Each resource contains information of a mutation event such as creation and dele
| eventTimestamp |The time in UTC when this collection event happened. | | ownerId| The name of the MongoDB collection. | | ownerResourceId | The resource ID of the MongoDB collection. |
-| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: collection creation event</li><li>Delete: collection deletion event</li><li>Replace: collection modification event</li><li>SystemOperation: collection modification event triggered by the system. This event is not initiated by the user</li></ul> |
+| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: collection creation event</li><li>Delete: collection deletion event</li><li>Replace: collection modification event</li><li>SystemOperation: collection modification event triggered by the system. This event isn't initiated by the user</li></ul> |
-To get a list of all container mutations under the same database, see [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-collections/list) article.
+To get a list of all container mutations under the same database, see the [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-collections/list) article.
### Restorable MongoDB resources
Each resource contains information about a mutation event, such as a creation an
|eventTimestamp| The time in UTC when this database event happened.| | ownerId| The name of the Graph database. | | ownerResourceId | The resource ID of the Graph database. |
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event is not initiated by the user. </li></ul> |
+| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user. </li></ul> |
-To get a event feed of all mutations on the Gremlin database for the account, see the [Restorable Graph Databases - List]( /rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-databases/list) article.
+To get an event feed of all mutations on the Gremlin database for the account, see the [Restorable Graph Databases - List]( /rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-databases/list) article.
### Restorable Graphs
Each resource contains information about a mutation event, such as creation and deletion.
| eventTimestamp |The time in UTC when this collection event happened. |
| ownerId| The name of the Graph collection. |
| ownerResourceId | The resource ID of the Graph collection. |
-| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: Graph creation event</li><li>Delete: Graph deletion event</li><li>Replace: Graph modification event</li><li>SystemOperation: collection modification event triggered by the system. This event is not initiated by the user.</li></ul> |
+| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: Graph creation event</li><li>Delete: Graph deletion event</li><li>Replace: Graph modification event</li><li>SystemOperation: collection modification event triggered by the system. This event isn't initiated by the user.</li></ul> |
To get a list of all container mutations under the same database, see the [Restorable Graphs - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-graphs/list) article.
### Restorable Table resources
-Lists all the restorable Azure Cosmos DB Tables available for a specific database account at a given time and location. Note the Table API does not specify an explicit database.
+Lists all the restorable Azure Cosmos DB Tables available for a specific database account at a given time and location. Note the Table API doesn't specify an explicit database.
|Property Name |Description |
|||
| TableNames | The list of Table containers under this account. |
-To get a list of Table that exist on the account at the given timestamp and location, see [Restorable Table Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-table-resources/list) article.
+To get a list of tables that exist on the account at the given timestamp and location, see [Restorable Table Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-table-resources/list) article.
### Restorable Table
Each resource contains information about a mutation event, such as creation and deletion.
|eventTimestamp| The time in UTC when this database event happened.|
| ownerId| The name of the Table database. |
| ownerResourceId | The resource ID of the Table resource. |
-| operationType | The operation type of this Table event. Here are the possible values:<br/><ul><li> Create: Table creation event</li><li> Delete: Table deletion event</li><li> Replace: Table modification event</li><li> SystemOperation: database modification event triggered by the system. This event is not initiated by the user </li></ul> |
+| operationType | The operation type of this Table event. Here are the possible values:<br/><ul><li> Create: Table creation event</li><li> Delete: Table deletion event</li><li> Replace: Table modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user </li></ul> |
To get a list of all table mutations under the same database, see the [Restorable Table - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-tables/list) article.
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-migrationchoices.md
The following factors determine the choice of the migration tool:
If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
* If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](estimate-ru-with-capacity-planner.md).
+>[!IMPORTANT]
+> The [Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator) is an open-source tool for live container migrations that implements change feed and bulk support. However, note that the user interface application code for this tool is not supported or actively maintained by Microsoft. For Azure Cosmos DB SQL API live container migrations, we recommend using the Spark Connector + Change Feed as illustrated in this [sample](https://github.com/Azure/azure-sdk-for-jav), which is fully supported by Microsoft.
+|Migration type|Solution|Supported sources|Supported targets|Considerations|
+||||||
+|Offline|[Data Migration Tool](import-data.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;AWS DynamoDB<br/>&bull;Azure Blob Storage|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB Tables API<br/>&bull;JSON Files |&bull; Easy to set up and supports multiple sources. <br/>&bull; Not suitable for large datasets.|
+|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB<br/>&bull;MongoDB <br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB<br/>&bull;JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |&bull; Easy to set up and supports multiple sources.<br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>&bull; Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.|
+|Offline|[Azure Cosmos DB Spark connector](./create-sql-api-spark.md)|Azure Cosmos DB SQL API. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB SQL API. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
+|Online|[Azure Cosmos DB Spark connector + Change Feed](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration)|Azure Cosmos DB SQL API. <br/><br/>Uses Azure Cosmos DB Change Feed to stream all historic data as well as live updates.| Azure Cosmos DB SQL API. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
|Offline|[Custom tool with Cosmos DB bulk executor library](migrate-cosmosdb-data.md)| The source depends on your custom code | Azure Cosmos DB SQL API| &bull; Provides checkpointing, dead-lettering capabilities which increases migration resiliency. <br/>&bull; Suitable for very large datasets (10 TB+). <br/>&bull; Requires custom setup of this tool running as an App Service. |
|Online|[Cosmos DB Functions + ChangeFeed API](change-feed-functions.md)| Azure Cosmos DB SQL API | Azure Cosmos DB SQL API| &bull; Easy to set up. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Not suitable for large datasets. <br/>&bull; Does not capture deletes from the source container. |
|Online|[Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator)| Azure Cosmos DB SQL API | Azure Cosmos DB SQL API| &bull; Provides progress tracking. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Works for larger datasets as well.<br/>&bull; Requires the user to set up an App Service to host the Change feed processor. <br/>&bull; Does not capture deletes from the source container.|
If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md).
|Migration type|Solution|Supported sources|Supported targets|Considerations|
||||||
|Offline|[cqlsh COPY command](cassandr#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB Cassandra API| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.|
-|Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/>&bull;Azure Cosmos DB Cassandra API| Azure Cosmos DB Cassandra API | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.|
+|Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/> | Azure Cosmos DB Cassandra API | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.|
+|Online|[Dual-write proxy + Spark](cassandr)| &bull;Apache Cassandra<br/>|&bull;Azure Cosmos DB Cassandra API <br/>| &bull; Supports larger datasets, but careful attention required for setup and validation. <br/>&bull; Open-source tools, no purchase required.|
|Online|[Striim (from Oracle DB/Apache Cassandra)](cassandr)| &bull;Oracle<br/>&bull;Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources.|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB Cassandra API <br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets.| &bull; Works with a large variety of sources like Oracle, DB2, SQL Server. <br/>&bull; Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>&bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
|Online|[Arcion (from Oracle DB/Apache Cassandra)](cassandr)|&bull;Oracle<br/>&bull;Apache Cassandra<br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported sources. |Azure Cosmos DB Cassandra API. <br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported targets. | &bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
For APIs other than the SQL API, Mongo API and the Cassandra API, there are vari
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
* Learn more by trying out the sample applications consuming the bulk executor library in [.NET](bulk-executor-dot-net.md) and [Java](bulk-executor-java.md).
* The bulk executor library is integrated into the Cosmos DB Spark connector. To learn more, see the [Azure Cosmos DB Spark connector](./create-sql-api-spark.md) article.
-* Contact the Azure Cosmos DB product team by opening a support ticket under the "General Advisory" problem type and "Large (TB+) migrations" problem subtype for additional help with large scale migrations.
+* Contact the Azure Cosmos DB product team by opening a support ticket under the "General Advisory" problem type and "Large (TB+) migrations" problem subtype for additional help with large scale migrations.
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md
Last updated 04/05/2021
cosmos-db Emulator Command Line Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/emulator-command-line-parameters.md
Title: Command-line and PowerShell reference for Azure Cosmos DB Emulator
description: Learn the command-line parameters for Azure Cosmos DB Emulator, how to control the emulator with PowerShell, and how to change the number of containers that you can create within the emulator.
Last updated 09/17/2020
cosmos-db Get Latest Restore Timestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-latest-restore-timestamp.md
Last updated 04/08/2022
# Get the latest restorable timestamp for continuous backup accounts
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB Gremlin API
description: Azure CLI Samples for Azure Cosmos DB Gremlin API
Last updated 02/21/2022
cosmos-db Graph Modeling Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-modeling-tools.md
Last updated 05/25/2021
# Third-party data modeling tools for Azure Cosmos DB graph data
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB Gremlin API with Bicep
description: Use Bicep to create and configure Azure Cosmos DB Gremlin API.
Last updated 9/13/2021
# Manage Azure Cosmos DB Gremlin API resources using Bicep
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Gremlin API
description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Gremlin API
Last updated 01/20/2021
# Azure PowerShell samples for Azure Cosmos DB Gremlin API
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/resource-manager-template-samples.md
Title: Resource Manager templates for Azure Cosmos DB Gremlin API
description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Gremlin API.
Last updated 10/14/2020
# Manage Azure Cosmos DB Gremlin API resources using Azure Resource Manager templates
cosmos-db Tutorial Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/tutorial-query-graph.md
Last updated 02/16/2022
ms.devlang: csharp
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Title: High availability in Azure Cosmos DB
description: This article describes how to build a highly available solution using Cosmos DB
Last updated 02/24/2022
# Achieve high availability with Cosmos DB
cosmos-db How Pricing Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-pricing-works.md
Title: Pricing model of Azure Cosmos DB
description: This article explains the pricing model of Azure Cosmos DB and how it simplifies your cost management and cost planning.
Last updated 03/24/2022
# Pricing model in Azure Cosmos DB
The pricing model of Azure Cosmos DB simplifies the cost management and planning
> > [!VIDEO https://aka.ms/docs.how-pricing-works]

-- **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos account you are using.
+- **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos account you're using.
- - **Provisioned Throughput**: [Provisioned throughput](set-throughput.md) (also called reserved throughput) provides high performance at any scale. You specify the throughput that you need in [Request Units](request-units.md) per second (RU/s), and Azure Cosmos DB dedicates the resources required to provide the configured throughput. You can [provision throughput on either a database or a container](set-throughput.md). Based on your workload needs, you can scale throughput up/down at any time or use [autoscale](provision-throughput-autoscale.md) (although there is a minimum throughput required on a database or a container to guarantee the SLAs). You are billed hourly for the maximum provisioned throughput for a given hour.
+ - **Provisioned Throughput**: [Provisioned throughput](set-throughput.md) (also called reserved throughput) provides high performance at any scale. You specify the throughput that you need in [Request Units](request-units.md) per second (RU/s), and Azure Cosmos DB dedicates the resources required to provide the configured throughput. You can [provision throughput on either a database or a container](set-throughput.md). Based on your workload needs, you can scale throughput up/down at any time or use [autoscale](provision-throughput-autoscale.md) (although there's a minimum throughput required on a database or a container to guarantee the SLAs). You're billed hourly for the maximum provisioned throughput for a given hour.
> [!NOTE]
> Because the provisioned throughput model dedicates resources to your container or database, you will be charged for the throughput you have provisioned even if you don't run any workloads.
- - **Serverless**: In [serverless](serverless.md) mode, you don't have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the amount of Request Units that has been consumed by your database operations.
+ - **Serverless**: In [serverless](serverless.md) mode, you don't have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units that has been consumed by your database operations.
-- **Storage**: You are billed a flat rate for the total amount of storage (in GBs) consumed by your data and indexes for a given hour. Storage is billed on a consumption basis, so you don't have to reserve any storage in advance. You are billed only for the storage you consume.
+- **Storage**: You're billed a flat rate for the total amount of storage (in GBs) consumed by your data and indexes for a given hour. Storage is billed on a consumption basis, so you don't have to reserve any storage in advance. You're billed only for the storage you consume.
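To make the hourly billing rule for provisioned throughput concrete, here's a toy calculation: each hour is billed at the maximum RU/s provisioned during that hour, whether or not you consume it. The per-100-RU/s rate below is invented for the sketch; real, region-specific rates are on the Azure Cosmos DB pricing page.

```python
# Hypothetical rate, for illustration only -- see the pricing page for real rates.
HYPOTHETICAL_RATE_PER_100_RUS_PER_HOUR = 0.008

def hourly_charge(max_provisioned_rus):
    """Charge for one hour, billed at the max RU/s provisioned in that hour."""
    return (max_provisioned_rus / 100) * HYPOTHETICAL_RATE_PER_100_RUS_PER_HOUR

# If you scale from 400 RU/s up to 1000 RU/s and back within a single hour,
# that hour is billed at the maximum, 1000 RU/s.
print(round(hourly_charge(1000), 4))
```

Serverless accounts skip this entirely: instead of an hourly charge on provisioned RU/s, you pay at the end of the billing period for the Request Units actually consumed.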
The pricing model in Azure Cosmos DB is consistent across all APIs. For more information, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/), [Understanding your Azure Cosmos DB bill](understand-your-bill.md) and [How Azure Cosmos DB pricing model is cost-effective for customers](total-cost-ownership.md).
-If you deploy your Azure Cosmos DB account to a non-government region in the US, there is a minimum price for both database and container-based throughput in provisioned throughput mode. There is no minimum price in serverless mode. The pricing varies depending on the region you are using, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for latest pricing information.
+If you deploy your Azure Cosmos DB account to a non-government region in the US, there's a minimum price for both database and container-based throughput in provisioned throughput mode. There's no minimum price in serverless mode. The pricing varies depending on the region you're using; see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
## Try Azure Cosmos DB for free
Azure Cosmos DB offers many options for developers to use it for free. These options include:
-* **Azure Cosmos DB free tier**: Azure Cosmos DB free tier makes it easy to get started, develop and test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account free, for the lifetime of the account. You can have up to one free tier account per Azure subscription and must opt-in when creating the account. To learn more, see how to [create a free tier account](free-tier.md) article.
+* **Azure Cosmos DB free tier**: Azure Cosmos DB free tier makes it easy to get started, develop and test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account free, for the lifetime of the account. You can have up to one free tier account per Azure subscription and must opt in when creating the account. To learn more, see how to [create a free tier account](free-tier.md) article.
* **Azure free account**: Azure offers a [free tier](https://azure.microsoft.com/free/) that gives you $200 in Azure credits for the first 30 days and a limited quantity of free services for 12 months. For more information, see [Azure free account](../cost-management-billing/manage/avoid-charges-free-account.md). Azure Cosmos DB is a part of Azure free account. Specifically for Azure Cosmos DB, this free account offers 25-GB storage and 400 RU/s of provisioned throughput for the entire year.
* **Try Azure Cosmos DB for free**: Azure Cosmos DB offers a time-limited experience by using try Azure Cosmos DB for free accounts. You can create an Azure Cosmos DB account, create database and collections and run a sample application by using the Quickstarts and tutorials. You can run the sample application without subscribing to an Azure account or using your credit card. [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) offers Azure Cosmos DB for one month, with the ability to renew your account any number of times.
-* **Azure Cosmos DB emulator**: Azure Cosmos DB emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Emulator is offered at no cost and with high fidelity to the cloud service. Using Azure Cosmos DB emulator, you can develop and test your applications locally, without creating an Azure subscription or incurring any costs. You can develop your applications by using the emulator locally before going into production. After you are satisfied with the functionality of the application against the emulator, you can switch to using the Azure Cosmos DB account in the cloud and significantly save on cost. For more information, see [Using Azure Cosmos DB for development and testing](local-emulator.md) for more details.
+* **Azure Cosmos DB emulator**: Azure Cosmos DB emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Emulator is offered at no cost and with high fidelity to the cloud service. Using Azure Cosmos DB emulator, you can develop and test your applications locally, without creating an Azure subscription or incurring any costs. You can develop your applications by using the emulator locally before going into production. After you're satisfied with the functionality of the application against the emulator, you can switch to using the Azure Cosmos DB account in the cloud and significantly save on cost. For more information about dev/test, see [using Azure Cosmos DB for development and testing](local-emulator.md).
## Pricing with reserved capacity
-Azure Cosmos DB [reserved capacity](cosmos-db-reserved-capacity.md) helps you save money when using the provisioned throughput mode by pre-paying for Azure Cosmos DB resources for either one year or three years. You can significantly reduce your costs with one-year or three-year upfront commitments and save between 20-65% discounts when compared to the regular pricing. Azure Cosmos DB reserved capacity helps you lower costs by pre-paying for the provisioned throughput (RU/s) for a period of one year or three years and you get a discount on the throughput provisioned.
+Azure Cosmos DB [reserved capacity](cosmos-db-reserved-capacity.md) helps you save money when using the provisioned throughput mode by pre-paying for Azure Cosmos DB resources for either one year or three years. You can significantly reduce your costs with one-year or three-year upfront commitments, with discounts of 20-65% compared to the regular pricing. Azure Cosmos DB reserved capacity helps you lower costs by pre-paying for the provisioned throughput (RU/s) for one year or three years, and you get a discount on the throughput provisioned.
-Reserved capacity provides a billing discount and does not affect the runtime state of your Azure Cosmos DB resources. Reserved capacity is available consistently to all APIs, which includes MongoDB, Cassandra, SQL, Gremlin, and Azure Tables and all regions worldwide. You can learn more about reserved capacity in [Prepay for Azure Cosmos DB resources with reserved capacity](cosmos-db-reserved-capacity.md) article and buy reserved capacity from the [Azure portal](https://portal.azure.com/).
+Reserved capacity provides a billing discount and doesn't affect the runtime state of your Azure Cosmos DB resources. Reserved capacity is available consistently to all APIs, which includes MongoDB, Cassandra, SQL, Gremlin, and Azure Tables and all regions worldwide. You can learn more about reserved capacity in [Prepay for Azure Cosmos DB resources with reserved capacity](cosmos-db-reserved-capacity.md) article and buy reserved capacity from the [Azure portal](https://portal.azure.com/).
## Next steps
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-manage-database-account.md
Title: Learn how to manage database accounts in Azure Cosmos DB
description: Learn how to manage Azure Cosmos DB resources by using the Azure portal, PowerShell, CLI, and Azure Resource Manager templates
Last updated 09/13/2021
# Manage an Azure Cosmos account using the Azure portal
cosmos-db How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-move-regions.md
Title: Move an Azure Cosmos DB account to another region
description: Learn how to move an Azure Cosmos DB account to another region.
Last updated 03/15/2022
# Move an Azure Cosmos DB account to another region
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Last updated 02/16/2022
# Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account
See [this page](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/sql-res
## Initialize the SDK with Azure AD
-To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account's primary key, you have to pass an instance of a `TokenCredential` class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an Azure AD (AAD) token on behalf of the identity you wish to use.
+To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account's primary key, you have to pass an instance of a `TokenCredential` class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an Azure AD token on behalf of the identity you wish to use.
The way you create a `TokenCredential` instance is beyond the scope of this article. There are many ways to create such an instance depending on the type of Azure AD identity you want to use (user principal, service principal, group etc.). Most importantly, your `TokenCredential` instance must resolve to the identity (principal ID) that you've assigned your roles to. You can find examples of creating a `TokenCredential` class:
cosmos-db Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/import-data.md
Title: 'Tutorial: Database migration tool for Azure Cosmos DB'
description: 'Tutorial: Learn how to use the open-source Azure Cosmos DB data migration tools to import data to Azure Cosmos DB from various sources including MongoDB, SQL Server, Table storage, Amazon DynamoDB, CSV, and JSON files. CSV to JSON conversion.'
Last updated 08/26/2021
# Tutorial: Use Data migration tool to migrate your data to Azure Cosmos DB
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Title: Introduction to Azure Cosmos DB
description: Learn about Azure Cosmos DB. This globally distributed multi-model database is built for low latency, elastic scalability, high availability, and offers native support for NoSQL data.
Last updated 08/26/2021
adobe-target: true
cosmos-db Large Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/large-partition-keys.md
Title: Create Azure Cosmos containers with large partition key
description: Learn how to create a container in Azure Cosmos DB with large partition key using Azure portal and different SDKs.
Last updated 12/8/2019
cosmos-db Latest Restore Timestamp Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/latest-restore-timestamp-continuous-backup.md
Last updated 04/08/2022
# Latest restorable timestamp for Azure Cosmos DB accounts with continuous backup mode
You can use latest restorable timestamp in the following use cases:
* You can get the latest restorable timestamp for a container, database, or an account and use it to trigger the restore. This is the latest timestamp up to which all the data of the specified resource or all its underlying resources has been successfully backed up.
-* You can use this API to identify that your data has been successfully backed up before deleting the account. If the timestamp returned by this API is less than the last write timestamp, then it means that there is some data that has not been backed up yet. In such case, you must call this API until the timestamp becomes equal to or greater than the last write timestamp. If an account exists in multiple locations, you must get the latest restorable timestamp in all the locations to make sure that data has been backed up in all regions before deleting the account.
+* You can use this API to identify that your data has been successfully backed up before deleting the account. If the timestamp returned by this API is less than the last write timestamp, then it means that there's some data that hasn't been backed up yet. In such case, you must call this API until the timestamp becomes equal to or greater than the last write timestamp. If an account exists in multiple locations, you must get the latest restorable timestamp in all the locations to make sure that data has been backed up in all regions before deleting the account.
* You can use this API to monitor that your data is being backed up on time. This timestamp is generally within a few hundred seconds of the current timestamp, although sometimes it can differ by more.

## Semantics
-The latest restorable timestamp for a container is the minimum timestamp upto which all its partitions has taken backup successfully in the given location. This Api calculates the latest restorable timestamp by retrieving the latest backup timestamp for each partition of the given container in given location and returns the minimum of all these timestamps. If the data for all its partitions is backed up and there was no new data written to those partitions, then it will return the maximum of current timestamp and the last data backup timestamp.
+The latest restorable timestamp for a container is the minimum timestamp up to which all its partitions have been backed up successfully in the given location. This API calculates the latest restorable timestamp by retrieving the latest backup timestamp for each partition of the given container in the given location and returns the minimum of all these timestamps. If the data for all its partitions is backed up and there was no new data written to those partitions, then it returns the maximum of the current timestamp and the last data backup timestamp.
-If a partition has not taken any backup yet but it has some data to be backed up, then it will return the minimum Unix (epoch) timestamp that is, Jan 1, 1970, midnight UTC (Coordinated Universal Time). In such cases, user must retry until it gives a timestamp greater than epoch timestamp.
+If a partition hasn't taken any backup yet but has some data to be backed up, then the API returns the minimum Unix (epoch) timestamp, that is, January 1, 1970, midnight UTC (Coordinated Universal Time). In such cases, the user must retry until the API returns a timestamp greater than the epoch timestamp.
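The min/max semantics above can be sketched in code. This is a minimal illustration under stated assumptions, not the service's actual implementation; the function name and parameters are hypothetical.

```python
from datetime import datetime, timezone

# Minimum Unix (epoch) timestamp, returned while a partition still has no backup.
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def latest_restorable_timestamp(partition_backup_times, now, no_pending_writes):
    """Hypothetical sketch of the per-container, per-location calculation.

    partition_backup_times: latest backup timestamp per partition, with None
    for a partition that has data but no backup yet.
    no_pending_writes: True when all partition data is backed up and no new
    data was written since the last backup.
    """
    if any(t is None for t in partition_backup_times):
        # A partition has no backup yet: return epoch; the caller must retry
        # until a timestamp greater than the epoch timestamp is returned.
        return EPOCH
    earliest_backup = min(partition_backup_times)
    if no_pending_writes:
        # Fully backed up and idle: maximum of current time and last backup time.
        return max(now, earliest_backup)
    return earliest_backup
```

For example, with both partitions backed up at `t3` and no pending writes, this returns `max(now, t3)`, mirroring the Case2 calculation below.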
## Latest restorable timestamp calculation
-The following example describes the expected outcome of latest restorable timestamp Api in different scenarios. In each scenario, we will discuss about the current log backup state of partition, pending data to be backed up and how it affects the overall latest restorable timestamp calculation for a container.
+The following example describes the expected outcome of the latest restorable timestamp API in different scenarios. In each scenario, we'll discuss the current log backup state of each partition, the pending data to be backed up, and how they affect the overall latest restorable timestamp calculation for a container.
-Let's say, we have an account which exists in 2 regions (East US and West US). We have a container "cont1" which has 2 partitions (Partition1 and Partition2). If we send a request to get the latest restorable timestamp for this container at timestamp 't3', the overall latest restorable timestamp for this container will be calculated as follows:
+Let's say we have an account that exists in two regions (East US and West US) and a container "cont1" that has two partitions (Partition1 and Partition2). If we send a request to get the latest restorable timestamp for this container at timestamp 't3', the overall latest restorable timestamp for this container is calculated as follows:
-##### Case1: Data for all the partitions has not been backed up yet
+##### Case1: Data for all the partitions hasn't been backed up yet
*East US Region:*
Let's say, we have an account which exists in 2 regions (East US and West US). W
* Partition 2: Last backup time = t3, and all its data is backed up.
* Latest restorable timestamp = max (current timestamp, t3, t3)
-##### Case3: When one or more partitions has not taken any backup yet
+##### Case3: When one or more partitions haven't taken any backup yet
*East US Region:*
Yes. This API can be used for account provisioned with continuous backup mode or
The log backup data is backed up every 100 seconds. However, in some exceptional cases, backups could be delayed for more than 100 seconds.

#### Will restorable timestamp work for deleted accounts?
-No. It only applies only to live accounts. You can get the restorable timestamp to trigger the live account restore or monitor that your data is being backed up on time.
+No. It applies only to live accounts. You can get the restorable timestamp to trigger the live account restore or monitor that your data is being backed up on time.
## Next steps
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
Title: Migrate an Azure Cosmos DB account from periodic to continuous backup mode
-description: Azure Cosmos DB currently supports a one-way migration from periodic to continuous mode and it's irreversible. After migrating from periodic to continuous mode, you can leverage the benefits of continuous mode.
+description: Azure Cosmos DB currently supports a one-way migration from periodic to continuous mode and it's irreversible. After migrating from periodic to continuous mode, you can apply the benefits of continuous mode.
Last updated 04/08/2022 -+ # Migrate an Azure Cosmos DB account from periodic to continuous backup mode [!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)]
-Azure Cosmos DB accounts with periodic mode backup policy can be migrated to continuous mode using [Azure portal](#portal), [CLI](#cli), [PowerShell](#powershell), or [Resource Manager templates](#ARM-template). Migration from periodic to continuous mode is a one-way migration and it's not reversible. After migrating from periodic to continuous mode, you can leverage the benefits of continuous mode.
+Azure Cosmos DB accounts with periodic mode backup policy can be migrated to continuous mode using [Azure portal](#portal), [CLI](#cli), [PowerShell](#powershell), or [Resource Manager templates](#ARM-template). Migration from periodic to continuous mode is a one-way migration and it's not reversible. After migrating from periodic to continuous mode, you can apply the benefits of continuous mode.
The following are the key reasons to migrate to continuous mode:
* The ability to do self-service restore using Azure portal, CLI, or PowerShell.
* The ability to restore at time granularity of a second within the last 30-day window.
* The ability to make sure that the backup is consistent across shards or partition key ranges within a period.
-* The ability to restore container, database, or the full account when it is deleted or modified.
+* The ability to restore container, database, or the full account when it's deleted or modified.
* The ability to choose the events on the container, database, or account and decide when to initiate the restore. > [!NOTE]
To perform the migration, you need `Microsoft.DocumentDB/databaseAccounts/write`
## Pricing after migration
-After you migrate your account to continuous backup mode, the cost with this mode is different when compared to the periodic backup mode. The continuous mode backup cost is significantly cheaper than periodic mode. To learn more, see the [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing) example.
+After you migrate your account to continuous backup mode, the cost with this mode is different when compared to the periodic backup mode. The continuous mode backup cost is cheaper than periodic mode. To learn more, see the [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing) example.
## <a id="portal"></a> Migrate using portal
Use the following steps to migrate your account from periodic backup to continuo
:::image type="content" source="./media/migrate-continuous-backup/enable-backup-migration.png" alt-text="Migrate to continuous mode using Azure portal" lightbox="./media/migrate-continuous-backup/enable-backup-migration.png":::
-1. When the migration is in progress, the status shows **Pending.** After the it's complete, the status changes to **On.** Migration time depends on the size of data in your account.
+1. When the migration is in progress, the status shows **Pending.** After it's complete, the status changes to **On.** Migration time depends on the size of data in your account.
:::image type="content" source="./media/migrate-continuous-backup/migration-status.png" alt-text="Check the status of migration from Azure portal" lightbox="./media/migrate-continuous-backup/migration-status.png":::
Install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps
* If you already have Azure CLI installed, use the `az upgrade` command to upgrade to the latest version.
* Alternatively, you can also use Cloud Shell from the Azure portal.
-1. Log in to your Azure account and run the following command to migrate your account to continuous mode:
+1. Sign in to your Azure account and run the following command to migrate your account to continuous mode:
```azurecli-interactive az login
az group deployment create -g <ResourceGroup> --template-file <ProvisionTemplate
## What to expect during and after migration?
-When migrating from periodic mode to continuous mode, you cannot run any control plane operations that performs account level updates or deletes. For example, operations such as adding or removing regions, account failover, updating backup policy etc. can't be run while the migration is in progress. The time for migration depends on the size of data and the number of regions in your account. Restore action on the migrated accounts only succeeds from the time when migration successfully completes.
+When migrating from periodic mode to continuous mode, you can't run any control plane operations that perform account-level updates or deletes. For example, operations such as adding or removing regions, account failover, and updating the backup policy can't be run while the migration is in progress. The time for migration depends on the size of data and the number of regions in your account. Restore actions on the migrated accounts succeed only from the time when the migration successfully completes.
You can restore your account after the migration completes. If the migration completes at 1:00 PM PST, you can do point in time restore starting from 1:00 PM PST.
You can restore your account after the migration completes. If the migration com
Yes.

#### Which accounts can be targeted for backup migration?
-Currently, SQL API and API for MongoDB accounts with single write region, that have shared, provisioned, or autoscale provisioned throughput support migration. Table API and Gremlin API are in preview.
+Currently, SQL API and API for MongoDB accounts with a single write region that have shared, provisioned, or autoscale provisioned throughput support migration. Table API and Gremlin API are in preview.
-Accounts enabled with analytical storage and multiple-write regions are not supported for migration.
+Accounts enabled with analytical storage and multiple-write regions aren't supported for migration.
#### Does the migration take time? What is the typical time?
-Migration takes time and it depends on the size of data and the number of regions in your account. You can get the migration status using Azure CLI or PowerShell commands. For large accounts with 10s of terabytes of data, the migration can take up to few days to complete.
+Migration takes time, and the duration depends on the size of data and the number of regions in your account. You can get the migration status using Azure CLI or PowerShell commands. For large accounts with tens of terabytes of data, the migration can take up to a few days to complete.
#### Does the migration cause any availability impact/downtime?
-No, the migration operation takes place in the background, so the client requests are not impacted. However, we need to perform some backend operations during the migration, and it might take extra time if the account is under heavy load.
+No, the migration operation takes place in the background, so the client requests aren't impacted. However, we need to perform some backend operations during the migration, and it might take extra time if the account is under heavy load.
#### What happens if the migration fails? Will I still get the periodic backups or get the continuous backups?
Once the migration process is started, the account starts switching to continuous mode. If the migration fails, you must initiate the migration again until it succeeds.
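The retry guidance above can be sketched as a simple loop. The `start_migration` and `get_status` callables are hypothetical placeholders for whatever CLI or PowerShell status check you use; this is not a real SDK API.

```python
import time

def migrate_until_succeeded(start_migration, get_status, poll_seconds=60):
    """Re-initiate the periodic-to-continuous migration until it reports
    success, polling the status between checks (hypothetical sketch)."""
    while True:
        start_migration()
        status = get_status()
        while status == "InProgress":
            time.sleep(poll_seconds)
            status = get_status()
        if status == "Succeeded":
            return
        # "Failed": initiate the migration again, per the guidance above.
```

Note that a failed migration doesn't block control plane operations, but the recommendation is to keep retrying until it succeeds before performing any other control plane operation.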
To restore to a time before t1, you can open a support ticket like you normally
#### Which account-level control plane operations are blocked during migration?
Operations such as add/remove region, failover, changing the backup policy, and throughput changes resulting in data movement are blocked during migration.
-#### If the migration fails for some underlying issue, would it still block the control plane operation until it is retried and completed successfully?
-Failed migration will not block any control plane operations. If migration fails, it's recommended to retry until it succeeds before performing any other control plane operations.
+#### If the migration fails for some underlying issue, would it still block the control plane operation until it's retried and completed successfully?
+Failed migration won't block any control plane operations. If migration fails, it's recommended to retry until it succeeds before performing any other control plane operations.
#### Is it possible to cancel the migration?
-It is not possible to cancel the migration because it is not a reversible operation.
+It isn't possible to cancel the migration because it isn't a reversible operation.
#### Is there a tool that can help estimate migration time based on the data usage and number of regions?
There isn't a tool to estimate the time. But our scale runs indicate that a single region with 1 TB of data takes roughly one and a half hours.
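As a rough back-of-the-envelope aid, the quoted scale-run figure can be turned into a simple estimate. The linear scaling across data size and regions is purely an assumption for illustration; actual duration varies, and there is no official estimation tool.

```python
def estimate_migration_hours(data_tb, regions=1, hours_per_tb_per_region=1.5):
    """Hypothetical linear estimate from the quoted figure: one region
    with 1 TB of data takes roughly 1.5 hours. Assumes linear scaling,
    which is an illustration only, not an official formula."""
    return data_tb * regions * hours_per_tb_per_region
```

For example, `estimate_migration_hours(30)` gives `45.0` hours, roughly consistent with "up to a few days" for accounts with tens of terabytes of data.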
To learn more about continuous backup mode, see the following articles:
* Restore an account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template). Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB API for MongoDB description: Azure CLI Samples for Azure Cosmos DB API for MongoDB-+ Last updated 02/21/2022-++
cosmos-db Connect Mongodb Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-mongodb-account.md
Last updated 08/26/2021-+ adobe-target: true adobe-target-activity: DocsExp-A/B-384740-MongoDB-2.8.2021 adobe-target-experience: Experience B
cosmos-db Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/consistency-mapping.md
Last updated 10/12/2020-+
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/manage-with-bicep.md
Title: Create and manage MongoDB API for Azure Cosmos DB with Bicep description: Use Bicep to create and configure MongoDB API Azure Cosmos DB API.-+ Last updated 05/23/2022-++ # Manage Azure Cosmos DB MongoDB API resources using Bicep
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB API for MongoDB description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB API for MongoDB-+ Last updated 08/26/2021-++ # Azure PowerShell samples for Azure Cosmos DB API for MongoDB
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/resource-manager-template-samples.md
Title: Resource Manager templates for Azure Cosmos DB API for MongoDB description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB API for MongoDB. -+ Last updated 05/23/2022-++ # Manage Azure Cosmos DB MongoDB API resources using Azure Resource Manager templates
cosmos-db Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/troubleshoot-query-performance.md
Last updated 08/26/2021 -+ # Troubleshoot query issues when using the Azure Cosmos DB API for MongoDB
cosmos-db Tutorial Develop Mongodb React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-mongodb-react.md
ms.devlang: javascript
Last updated 08/26/2021 -+ # Create a MongoDB app with React and Azure Cosmos DB
cosmos-db Tutorial Develop Nodejs Part 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-1.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db Tutorial Develop Nodejs Part 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-2.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Create a Node.js Express app [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db Tutorial Develop Nodejs Part 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-3.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Build the UI with Angular [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db Tutorial Develop Nodejs Part 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-4.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Create a Cosmos account
cosmos-db Tutorial Develop Nodejs Part 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-5.md
Last updated 08/26/2021 -+ #Customer intent: As a developer, I want to build a Node.js application, so that I can manage the data stored in Cosmos DB.
cosmos-db Tutorial Develop Nodejs Part 6 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-6.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Add CRUD functions to the app [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db Tutorial Global Distribution Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-global-distribution-mongodb.md
Last updated 08/26/2021-+ ms.devlang: csharp
cosmos-db Tutorial Query Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-query-mongodb.md
Last updated 12/03/2019-+ # Query data by using Azure Cosmos DB's API for MongoDB
cosmos-db Online Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/online-backup-and-restore.md
Last updated 11/15/2021 -+
cosmos-db Optimize Cost Reads Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-reads-writes.md
Title: Optimizing the cost of your requests in Azure Cosmos DB description: This article explains how to optimize costs when issuing requests on Azure Cosmos DB.--+++ Last updated 08/26/2021
cosmos-db Optimize Cost Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-regions.md
Title: Optimize cost for multi-region deployments in Azure Cosmos DB description: This article explains how to manage costs of multi-region deployments in Azure Cosmos DB.--+++ Last updated 08/26/2021
cosmos-db Optimize Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-storage.md
Title: Optimize storage cost in Azure Cosmos DB description: This article explains how to manage storage costs for the data stored in Azure Cosmos DB--+++ Last updated 08/26/2021
cosmos-db Optimize Cost Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-throughput.md
Title: Optimizing throughput cost in Azure Cosmos DB description: This article explains how to optimize throughput costs for the data stored in Azure Cosmos DB.--+++ Last updated 08/26/2021
cosmos-db Optimize Dev Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-dev-test.md
Title: Optimizing for development and testing in Azure Cosmos DB description: This article explains how Azure Cosmos DB offers multiple options for development and testing of the service for free.--+++ Last updated 08/26/2021
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Last updated 03/24/2022 -+ # Partitioning and horizontal scaling in Azure Cosmos DB
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB
description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 05/11/2022 --+++
cosmos-db Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy.md
Title: Use Azure Policy to implement governance and controls for Azure Cosmos DB resources description: Learn how to use Azure Policy to implement governance and controls for Azure Cosmos DB resources.--+++ Last updated 09/23/2020
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
Last updated 04/18/2022 -+ ms.devlang: azurecli
cosmos-db Relational Nosql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/relational-nosql.md
Last updated 12/16/2019-+ adobe-target: true
cosmos-db Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/request-units.md
Title: Request Units as a throughput and performance currency in Azure Cosmos DB description: Learn about how to specify and estimate Request Unit requirements in Azure Cosmos DB--+++ Last updated 03/24/2022 - # Request Units in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
cosmos-db Resource Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-locks.md
Title: Prevent Azure Cosmos DB resources from being deleted or changed description: Use Azure Resource Locks to prevent Azure Cosmos DB resources from being deleted or changed. -+ Last updated 05/13/2021-++ ms.devlang: azurecli
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Last updated 04/18/2022 -+
cosmos-db Scaling Provisioned Throughput Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scaling-provisioned-throughput-best-practices.md
Last updated 08/20/2021 -+ # Best practices for scaling provisioned throughput (RU/s)
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
Title: Azure Cosmos DB Cassandra API keyspace and table with autoscale description: Use Azure CLI to create an Azure Cosmos DB Cassandra API account, keyspace, and table with autoscale.--+++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/create.md
Title: Create a Cassandra keyspace and table for Azure Cosmos DB description: Create a Cassandra keyspace and table for Azure Cosmos DB--+++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/lock.md
Title: Create resource lock for a Cassandra keyspace and table for Azure Cosmos DB description: Create resource lock for a Cassandra keyspace and table for Azure Cosmos DB--+++
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/serverless.md
Title: Create a Cassandra serverless account, keyspace and table for Azure Cosmos DB description: Create a Cassandra serverless account, keyspace and table for Azure Cosmos DB--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources--+++
cosmos-db Ipfirewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/ipfirewall.md
Title: Create an Azure Cosmos account with IP firewall description: Create an Azure Cosmos account with IP firewall--+++ Last updated 02/21/2022
cosmos-db Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/keys.md
Title: Work with account keys and connection strings for an Azure Cosmos account description: Work with account keys and connection strings for an Azure Cosmos account--+++ Last updated 02/21/2022
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
Title: Add regions, change failover priority, trigger failover for an Azure Cosmos account description: Add regions, change failover priority, trigger failover for an Azure Cosmos account--+++ Last updated 02/21/2022
cosmos-db Service Endpoints Ignore Missing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints-ignore-missing-vnet.md
Title: Connect an existing Azure Cosmos account with virtual network service endpoints description: Connect an existing Azure Cosmos account with virtual network service endpoints--+++ Last updated 02/21/2022
cosmos-db Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints.md
Title: Create an Azure Cosmos account with virtual network service endpoints description: Create an Azure Cosmos account with virtual network service endpoints--+++ Last updated 02/21/2022
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
Title: Azure Cosmos DB Gremlin database and graph with autoscale description: Use this Azure CLI script to create an Azure Cosmos DB Gremlin API account, database, and graph with autoscale.--+++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/create.md
Title: Create a Gremlin database and graph for Azure Cosmos DB description: Create a Gremlin database and graph for Azure Cosmos DB--+++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/lock.md
Title: Create resource lock for a Gremlin database and graph for Azure Cosmos DB description: Create resource lock for a Gremlin database and graph for Azure Cosmos DB--+++
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
Title: Azure Cosmos DB Gremlin serverless account, database, and graph description: Use this Azure CLI script to create an Azure Cosmos DB Gremlin serverless account, database, and graph.--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Gremlin API resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Gremlin API resources--+++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/autoscale.md
Title: Create a database with autoscale and shared collections for MongoDB API for Azure Cosmos DB description: Create a database with autoscale and shared collections for MongoDB API for Azure Cosmos DB--+++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/create.md
Title: Create a database and collection for MongoDB API for Azure Cosmos DB description: Create a database and collection for MongoDB API for Azure Cosmos DB--+++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/lock.md
Title: Create resource lock for a database and collection for MongoDB API for Azure Cosmos DB description: Create resource lock for a database and collection for MongoDB API for Azure Cosmos DB--+++
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/serverless.md
Title: Create a serverless database and collection for MongoDB API for Azure Cosmos DB description: Create a serverless database and collection for MongoDB API for Azure Cosmos DB--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB API for MongoDB resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB API for MongoDB resources--+++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/autoscale.md
Title: Create a Core (SQL) API database and container with autoscale for Azure Cosmos DB description: Create a Core (SQL) API database and container with autoscale for Azure Cosmos DB--+++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/create.md
Title: Create a Core (SQL) API database and container for Azure Cosmos DB description: Create a Core (SQL) API database and container for Azure Cosmos DB--+++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/lock.md
Title: Create resource lock for a Azure Cosmos DB Core (SQL) API database and container description: Create resource lock for a Azure Cosmos DB Core (SQL) API database and container--+++
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/serverless.md
Title: Create a Core (SQL) API serverless account, database and container for Azure Cosmos DB description: Create a Core (SQL) API serverless account, database and container for Azure Cosmos DB--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources--+++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
Title: Create a Table API table with autoscale for Azure Cosmos DB description: Create a Table API table with autoscale for Azure Cosmos DB
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
Title: Create a Table API table for Azure Cosmos DB description: Create a Table API table for Azure Cosmos DB
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
Title: Create resource lock for an Azure Cosmos DB Table API table description: Create resource lock for an Azure Cosmos DB Table API table
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
Title: Create a Table API serverless account and table for Azure Cosmos DB description: Create a Table API serverless account and table for Azure Cosmos DB
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Table API resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Table API resources
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/autoscale.md
Title: PowerShell script to create Azure Cosmos DB Cassandra API keyspace and table with autoscale description: Azure PowerShell script - Azure Cosmos DB create Cassandra API keyspace and table with autoscale Last updated 07/30/2020
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/create.md
Title: PowerShell script to create Azure Cosmos DB Cassandra API keyspace and table description: Azure PowerShell script - Azure Cosmos DB create Cassandra API keyspace and table Last updated 05/13/2020
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB Cassandra API resources description: Azure PowerShell script - Azure Cosmos DB list and get operations for Cassandra API Last updated 03/18/2020
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos Cassandra API keyspace and table description: Create resource lock for Azure Cosmos Cassandra API keyspace and table
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources Last updated 10/07/2020
cosmos-db Account Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/account-update.md
Title: PowerShell script to update the default consistency level on an Azure Cosmos account description: Azure PowerShell script sample - Update default consistency level on an Azure Cosmos DB account using PowerShell Last updated 03/21/2020
cosmos-db Failover Priority Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/failover-priority-update.md
Title: PowerShell script to change failover priority for an Azure Cosmos account with single write region description: Azure PowerShell script sample - Change failover priority or trigger failover for an Azure Cosmos account with single write region Last updated 03/18/2020
cosmos-db Firewall Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/firewall-create.md
Title: PowerShell script to create an Azure Cosmos DB account with IP Firewall description: Azure PowerShell script sample - Create an Azure Cosmos DB account with IP Firewall Last updated 03/18/2020
cosmos-db Keys Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/keys-connection-strings.md
Title: PowerShell script to get key and connection string operations for an Azure Cosmos DB account description: Azure PowerShell script sample - Account key and connection string operations for an Azure Cosmos DB account Last updated 03/18/2020
cosmos-db Update Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/update-region.md
Title: PowerShell script to update regions for an Azure Cosmos DB account description: Run this Azure PowerShell script to add regions or change region failover order for an Azure Cosmos DB account. Last updated 05/02/2022
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/autoscale.md
Title: PowerShell script to create Azure Cosmos DB Gremlin API database and graph with autoscale description: Azure PowerShell script - Azure Cosmos DB create Gremlin API database and graph with autoscale Last updated 07/30/2020
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/create.md
Title: PowerShell script to create Azure Cosmos DB Gremlin API database and graph description: Azure PowerShell script - Azure Cosmos DB create Gremlin API database and graph Last updated 05/13/2020
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/list-get.md
Title: PowerShell script to list or get Azure Cosmos DB Gremlin API databases and graphs description: Run this Azure PowerShell script to list all or get specific Azure Cosmos DB Gremlin API databases and graphs. Last updated 05/02/2022
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos Gremlin API database and graph description: Create resource lock for Azure Cosmos Gremlin API database and graph
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Gremlin API description: PowerShell scripts for throughput (RU/s) operations for Gremlin API Last updated 10/07/2020
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/autoscale.md
Title: PowerShell script to create Azure Cosmos MongoDB API database and collection with autoscale description: Azure PowerShell script - create Azure Cosmos MongoDB API database and collection with autoscale Last updated 07/30/2020
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/create.md
Title: PowerShell script to create Azure Cosmos MongoDB API database and collection description: Azure PowerShell script - create Azure Cosmos MongoDB API database and collection Last updated 05/13/2020
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/list-get.md
Title: PowerShell script to list and get operations in Azure Cosmos DB's API for MongoDB description: Azure PowerShell script - Azure Cosmos DB list and get operations for MongoDB API Last updated 05/01/2020
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos MongoDB API database and collection description: Create resource lock for Azure Cosmos MongoDB API database and collection
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB's API for MongoDB description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB's API for MongoDB Last updated 10/07/2020
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/autoscale.md
Title: PowerShell script to create Azure Cosmos DB SQL API database and container with autoscale description: Azure PowerShell script - Azure Cosmos DB create SQL API database and container with autoscale Last updated 07/30/2020
cosmos-db Create Index None https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/create-index-none.md
Title: PowerShell script to create a container with indexing turned off in an Azure Cosmos DB account description: Azure PowerShell script sample - Create a container with indexing turned off in an Azure Cosmos DB account Last updated 05/13/2020
cosmos-db Create Large Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/create-large-partition-key.md
Title: PowerShell script to create an Azure Cosmos DB container with a large partition key description: Azure PowerShell script sample - Create a container with a large partition key in an Azure Cosmos DB account Last updated 05/13/2020
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/create.md
Title: PowerShell script to create Azure Cosmos DB SQL API database and container description: Azure PowerShell script - Azure Cosmos DB create SQL API database and container Last updated 05/13/2020
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB SQL API resources description: Azure PowerShell script - Azure Cosmos DB list and get operations for SQL API Last updated 03/17/2020
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos SQL API database and container description: Create resource lock for Azure Cosmos SQL API database and container
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB SQL API database or container description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB SQL API database or container Last updated 10/07/2020
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/autoscale.md
Title: PowerShell script to create a table with autoscale in Azure Cosmos DB Table API description: PowerShell script to create a table with autoscale in Azure Cosmos DB Table API Last updated 07/30/2020
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/create.md
Title: PowerShell script to create a table in Azure Cosmos DB Table API description: Learn how to use a PowerShell script to create a table in Azure Cosmos DB Table API Last updated 05/13/2020
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB Table API operations description: Azure PowerShell script - Azure Cosmos DB list and get operations for Table API Last updated 07/31/2020
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos Table API table description: Create resource lock for Azure Cosmos Table API table
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Table API description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Table API Last updated 10/07/2020
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB
description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 05/10/2022
cosmos-db Set Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/set-throughput.md
Title: Provision throughput on Azure Cosmos containers and databases description: Learn how to set provisioned throughput for your Azure Cosmos containers and databases. Last updated 09/16/2021
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-dotnet.md
Last updated 04/01/2022
Watch the video below to learn more about using the .NET SDK from a Cosmos DB engineer.
> [!VIDEO https://aka.ms/docs.dotnet-best-practices]

## Checklist
-|Checked | Topic |Details/Links |
+|Checked | Subject |Details/Links |
||||
|<input type="checkbox"/> | SDK Version | Always using the [latest version](sql-api-sdk-dotnet-standard.md) of the Cosmos DB SDK available for optimal performance. |
| <input type="checkbox"/> | Singleton Client | Use a [single instance](/dotnet/api/microsoft.azure.cosmos.cosmosclient?view=azure-dotnet&preserve-view=true) of `CosmosClient` for the lifetime of your application for [better performance](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage). |
-| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover will not succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution-sql-api.md) |
-| <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution-sql-api.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
-| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. |
+| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution-sql-api.md) |
+| <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution-sql-api.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics, see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
+| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is high. |
| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips.md#hosting) processing for best performance, whenever possible. |
| <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3-sql.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).|
|<input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher-end virtual machine where the max CPU usage is under 70%. |
|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, set [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool` and configure the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true). The `IdleConnectionTimeout` property controls how long unused connections stay open before they're closed, which reduces the number of unused connections. By default, idle connections are kept open indefinitely. The value must be greater than or equal to 10 minutes; we recommend values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. |
|<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns.
Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. |
|<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, use both the `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Cosmos DB, see the [request timeout troubleshooting guide](troubleshoot-dot-net-sdk-request-timeout.md). |
-|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK will not retry on writes for transient failures as writes are not idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](troubleshoot-dot-net-sdk.md#retry-logics) |
+|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](troubleshoot-dot-net-sdk.md#retry-logics) |
|<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
|<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
-| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you are not aware of the number of partitions, start by using `int.MaxValue` which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
+| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. |
| <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK, see the [indexing policy guidance](performance-tips-dotnet-sdk-v3-sql.md#indexing-policy). |
| <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
| <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. |
-| <input type="checkbox"/> | Enabling Query Metrics | For additional logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics [visit](profile-sql-api-query.md) |
-| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the V2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in v3 SDK for more detailed cosmos diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if the `Diagnostics.ElapsedTime` is greater than a designated threshold value (i.e. if you have an SLA of 10 seconds, then capture diagnostics when `ElapsedTime` > 10 seconds ). It is advised to only use these diagnostics during performance testing. |
-| <input type="checkbox"/> | DefaultTraceListener | The DefaultTraceListener poses performance issues on production environments causing high CPU and I/O bottlenecks. Make sure you are using the latest SDK versions or [remove the DefaultTraceListener from your application](performance-tips-dotnet-sdk-v3-sql.md#logging-and-tracing) |
+| <input type="checkbox"/> | Enabling Query Metrics | For more logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics [visit](profile-sql-api-query.md) |
+| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture extra diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the V2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in v3 SDK for more detailed cosmos diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if the `Diagnostics.ElapsedTime` is greater than a designated threshold value (that is, if you have an SLA of 10 seconds, then capture diagnostics when `ElapsedTime` > 10 seconds). It's advised to only use these diagnostics during performance testing. |
+| <input type="checkbox"/> | DefaultTraceListener | The DefaultTraceListener poses performance issues on production environments causing high CPU and I/O bottlenecks. Make sure you're using the latest SDK versions or [remove the DefaultTraceListener from your application](performance-tips-dotnet-sdk-v3-sql.md#logging-and-tracing) |
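Several rows of the checklist above (singleton client, preferred regions, Direct mode, port reuse, async-only calls, and parallel queries) can be illustrated together. The following is a minimal sketch against the v3 `Microsoft.Azure.Cosmos` SDK; the endpoint, key, region names, concurrency values, and the `CosmosClientFactory` type are illustrative placeholders, not values prescribed by the article.

```csharp
// Minimal sketch (v3 Microsoft.Azure.Cosmos SDK); endpoint, key, regions,
// and the query are placeholders.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class CosmosClientFactory
{
    // Lazy<T> gives one thread-safe CosmosClient for the application lifetime.
    private static readonly Lazy<CosmosClient> _client = new Lazy<CosmosClient>(() =>
        new CosmosClient(
            "https://<account>.documents.azure.com:443/",   // placeholder endpoint
            "<account-key>",                                // placeholder key
            new CosmosClientOptions
            {
                ConnectionMode = ConnectionMode.Direct,     // best performance
                ApplicationPreferredRegions = new List<string> { "West US 2", "East US" },
                AllowBulkExecution = true,                  // bulk ingestion scenarios
                IdleTcpConnectionTimeout = TimeSpan.FromMinutes(20),
                PortReuseMode = PortReuseMode.PrivatePortPool
            }));

    public static CosmosClient Instance => _client.Value;

    // Parallel cross-partition query with bounded prefetching.
    public static async Task<int> CountItemsAsync(Container container)
    {
        var options = new QueryRequestOptions
        {
            MaxConcurrency = 8,         // ideally the partition count; int.MaxValue to start
            MaxBufferedItemCount = 100  // cap pre-fetched results
        };

        int count = 0;
        FeedIterator<dynamic> iterator =
            container.GetItemQueryIterator<dynamic>("SELECT * FROM c", requestOptions: options);
        while (iterator.HasMoreResults)
        {
            count += (await iterator.ReadNextAsync()).Count;  // fully async, no .Result
        }
        return count;
    }
}
```

The `Lazy<T>` wrapper enforces one client per process, and `MaxConcurrency` would normally be tuned down from a high starting value toward the container's physical partition count.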
## Best practices when using Gateway mode

Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value.
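As a sketch of the Gateway-mode guidance above: in the v3 SDK, the per-endpoint connection limit can be set directly on `CosmosClientOptions` via `GatewayModeMaxConnectionLimit` rather than through `ServicePointManager`. The endpoint, key, and the value 500 below are illustrative.

```csharp
// Sketch for the v3 SDK: raise the Gateway-mode connection limit on the
// client options. Endpoint, key, and the limit of 500 are placeholders.
using Microsoft.Azure.Cosmos;

var gatewayOptions = new CosmosClientOptions
{
    ConnectionMode = ConnectionMode.Gateway,
    GatewayModeMaxConnectionLimit = 500   // default is 50
};

var client = new CosmosClient(
    "https://<account>.documents.azure.com:443/",
    "<account-key>",
    gatewayOptions);
```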
For a sample application that's used to evaluate Azure Cosmos DB for high-perfor
To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md). Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Bicep Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bicep-samples.md
Title: Bicep samples for Azure Cosmos DB Core (SQL API) description: Use Bicep to create and configure Azure Cosmos DB. Last updated 09/13/2021 # Bicep for Azure Cosmos DB
cosmos-db Bulk Executor Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-dot-net.md
ms.devlang: csharp Last updated 05/02/2020
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-java.md
ms.devlang: java Last updated 03/07/2022
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB | Microsoft Docs description: This article lists several Azure CLI code samples available for interacting with Azure Cosmos DB. View API-specific CLI samples. Last updated 02/21/2022 keywords: cosmos db, azure cli samples, azure cli code samples, azure cli script samples
cosmos-db Create Notebook Visualize Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-notebook-visualize-data.md
Last updated 11/05/2019 # Tutorial: Create a notebook in Azure Cosmos DB to analyze and visualize the data
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
Title: Quickstart - Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API description: This quickstart presents a code sample for the Azure Cosmos DB Spark 3 OLTP Connector for SQL API that you can use to connect to and query data in your Azure Cosmos DB account ms.devlang: java Last updated 03/01/2022
cosmos-db Create Support Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-support-request-quota-increase.md
Title: How to request quota increase for Azure Cosmos DB resources
description: Learn how to request a quota increase for Azure Cosmos DB resources. You will also learn how to enable a subscription to access a region. -+ Last updated 04/27/2022
cosmos-db Create Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-website.md
Title: Deploy a web app with a template - Azure Cosmos DB description: Learn how to deploy an Azure Cosmos account, Azure App Service Web Apps, and a sample web application using an Azure Resource Manager template.-+ Last updated 06/19/2020-++ # Deploy Azure Cosmos DB and Azure App Service with a web app from GitHub using an Azure Resource Manager Template
cosmos-db Database Transactions Optimistic Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/database-transactions-optimistic-concurrency.md
Title: Database transactions and optimistic concurrency control in Azure Cosmos DB description: This article describes database transactions and optimistic concurrency control in Azure Cosmos DB--+++ Last updated 12/04/2019- # Transactions and optimistic concurrency control
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-container.md
Title: Create a container in Azure Cosmos DB SQL API description: Learn how to create a container in Azure Cosmos DB SQL API by using Azure portal, .NET, Java, Python, Node.js, and other SDKs. -+ Last updated 01/03/2022-++ ms.devlang: csharp
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-consistency.md
Title: Manage consistency in Azure Cosmos DB description: Learn how to configure and manage consistency levels in Azure Cosmos DB using Azure portal, .NET SDK, Java SDK and various other SDKs-+ Last updated 02/16/2022-++ ms.devlang: csharp, java, javascript
cosmos-db How To Multi Master https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-multi-master.md
Title: How to configure multi-region writes in Azure Cosmos DB description: Learn how to configure multi-region writes for your applications by using different SDKs in Azure Cosmos DB.-+ Last updated 01/06/2021-++
cosmos-db How To Provision Container Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-provision-container-throughput.md
Title: Provision container throughput in Azure Cosmos DB SQL API description: Learn how to provision throughput at the container level in Azure Cosmos DB SQL API using Azure portal, CLI, PowerShell and various other SDKs. -+ Last updated 10/14/2020-++
cosmos-db How To Provision Database Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-provision-database-throughput.md
Title: Provision database throughput in Azure Cosmos DB SQL API description: Learn how to provision throughput at the database level in Azure Cosmos DB SQL API using Azure portal, CLI, PowerShell and various other SDKs. -+ Last updated 10/15/2020-++
cosmos-db How To Query Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-query-container.md
Title: Query containers in Azure Cosmos DB description: Learn how to query containers in Azure Cosmos DB using in-partition and cross-partition queries-+ Last updated 3/18/2019-++ # Query an Azure Cosmos container
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB with Bicep description: Use Bicep to create and configure Azure Cosmos DB for Core (SQL) API -+ Last updated 02/18/2022-++ # Manage Azure Cosmos DB Core (SQL) API resources with Bicep
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-cli.md
Title: Manage Azure Cosmos DB Core (SQL) API resources using Azure CLI description: Manage Azure Cosmos DB Core (SQL) API resources using Azure CLI. -+ Last updated 02/18/2022-++ # Manage Azure Cosmos Core (SQL) API resources using Azure CLI
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-powershell.md
Title: Manage Azure Cosmos DB Core (SQL) API resources using PowerShell description: Manage Azure Cosmos DB Core (SQL) API resources using PowerShell. -+ Last updated 02/18/2022-++
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-templates.md
Title: Create and manage Azure Cosmos DB with Resource Manager templates description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for Core (SQL) API -+ Last updated 02/18/2022-++ # Manage Azure Cosmos DB Core (SQL) API resources with Azure Resource Manager templates
cosmos-db Migrate Containers Partitioned To Nonpartitioned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-containers-partitioned-to-nonpartitioned.md
Title: Migrate non-partitioned Azure Cosmos containers to partitioned containers description: Learn how to migrate all the existing non-partitioned containers into partitioned containers.-+ Last updated 08/26/2021-++
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-data-striim.md
Last updated 12/09/2021-+ # Migrate data to Azure Cosmos DB SQL API account using Striim
cosmos-db Migrate Hbase To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-hbase-to-cosmos-db.md
Last updated 12/07/2021 -+ # Migrate data from Apache HBase to Azure Cosmos DB SQL API account
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Title: Modeling data in Azure Cosmos DB description: Learn about data modeling in NoSQL databases, differences between modeling data in a relational database and a document database.--+++ Last updated 03/24/2022 - # Data modeling in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
For comparison, let's first see how we might model data in a relational database
:::image type="content" source="./media/sql-api-modeling-data/relational-data-model.png" alt-text="Relational database model" border="false":::
-When working with relational databases, the strategy is to normalize all your data. Normalizing your data typically involves taking an entity, such as a person, and breaking it down into discrete components. In the example above, a person may have multiple contact detail records, as well as multiple address records. Contact details can be further broken down by further extracting common fields like a type. The same applies to address, each record can be of type *Home* or *Business*.
+The strategy, when working with relational databases, is to normalize all your data. Normalizing your data typically involves taking an entity, such as a person, and breaking it down into discrete components. In the example above, a person may have multiple contact detail records, and multiple address records. Contact details can be further broken down by further extracting common fields like a type. The same applies to address, each record can be of type *Home* or *Business*.
The guiding premise when normalizing data is to **avoid storing redundant data** on each record and rather refer to data. In this example, to read a person, with all their contact details and addresses, you need to use JOINS to effectively compose back (or denormalize) your data at run time.
JOIN ContactDetailType cdt ON cdt.Id = cd.TypeId
JOIN Address a ON a.PersonId = p.Id ```
-Updating a single person with their contact details and addresses requires write operations across many individual tables.
+Write operations across many individual tables are required to update a single person's contact details and addresses.
Now let's take a look at how we would model the same data as a self-contained entity in Azure Cosmos DB.
Now let's take a look at how we would model the same data as a self-contained en
Using the approach above we've **denormalized** the person record, by **embedding** all the information related to this person, such as their contact details and addresses, into a *single JSON* document. In addition, because we're not confined to a fixed schema we have the flexibility to do things like having contact details of different shapes entirely.
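To make the embedded shape concrete, here is a minimal sketch in plain Python (the field names are illustrative assumptions, not the article's exact sample document):

```python
# A denormalized person entity: contact details and addresses are embedded
# in one document, so no joins are needed to read the whole record.
person = {
    "id": "1",
    "firstName": "Thomas",
    "lastName": "Andersen",
    "addresses": [
        {"line1": "100 Some Street", "city": "Seattle", "type": "Home"},
    ],
    "contactDetails": [
        {"email": "thomas@andersen.example"},
        # No fixed schema: this entry has a different shape from the one above.
        {"phone": "+1 425 555 0100", "extension": 5555},
    ],
}

# Reading the complete entity is a single point lookup.
store = {person["id"]: person}
full_record = store["1"]
```

Reading or writing this person is one operation against one item, which is the point of the embedded model.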
-Retrieving a complete person record from the database is now a **single read operation** against a single container and for a single item. Updating a person record, with their contact details and addresses, is also a **single write operation** against a single item.
+Retrieving a complete person record from the database is now a **single read operation** against a single container and for a single item. Updating the contact details and addresses of a person record is also a **single write operation** against a single item.
By denormalizing data, your application may need to issue fewer queries and updates to complete common operations.
In general, use embedded data models when:
* There are **contained** relationships between entities.
* There are **one-to-few** relationships between entities.
* There's embedded data that **changes infrequently**.
-* There's embedded data that will not grow **without bound**.
+* There's embedded data that won't grow **without bound**.
* There's embedded data that is **queried frequently together**. > [!NOTE]
Take this JSON snippet.
This might be what a post entity with embedded comments would look like if we were modeling a typical blog, or CMS, system. The problem with this example is that the comments array is **unbounded**, meaning that there's no (practical) limit to the number of comments any single post can have. This may become a problem as the size of the item could grow infinitely large, so this is a design you should avoid.
-As the size of the item grows the ability to transmit the data over the wire as well as reading and updating the item, at scale, will be impacted.
+As the size of the item grows, the ability to transmit the data over the wire, and to read and update the item at scale, will be impacted.
In this case, it would be better to consider the following data model.
Comment items:
] ```
-This model has a document for each comment with a property that contains the post id. This allows posts to contain any number of comments and can grow efficiently. Users wanting to see more
-than the most recent comments would query this container passing the postId which should be the partition key for the comments container.
+This model has a document for each comment with a property that contains the post identifier. This allows posts to contain any number of comments and can grow efficiently. Users wanting to see more
+than the most recent comments would query this container passing the postId, which should be the partition key for the comments container.
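A small sketch of this comment-per-item layout, in plain Python simulating an in-partition query (the `postId` property follows the text; everything else is assumed):

```python
# Each comment is its own item; the postId value is the partition key.
comments = [
    {"id": "c1", "postId": "post-1", "text": "First!"},
    {"id": "c2", "postId": "post-1", "text": "Great article."},
    {"id": "c3", "postId": "post-2", "text": "Unrelated post."},
]

def comments_for_post(items, post_id):
    # In Azure Cosmos DB this would be an efficient in-partition query,
    # because postId is the partition key of the comments container.
    return [c for c in items if c["postId"] == post_id]

page = comments_for_post(comments, "post-1")
```

Because each comment is a separate item, a post can accumulate any number of comments without any single item growing without bound.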
Another case where embedding data isn't a good idea is when the embedded data is used often across items and will change frequently.
Take this JSON snippet.
"holdings": [ { "numberHeld": 100,
- "stock": { "symbol": "zaza", "open": 1, "high": 2, "low": 0.5 }
+ "stock": { "symbol": "zbzb", "open": 1, "high": 2, "low": 0.5 }
}, { "numberHeld": 50,
Take this JSON snippet.
This could represent a person's stock portfolio. We have chosen to embed the stock information into each portfolio document. In an environment where related data is changing frequently, like a stock trading application, embedding data that changes frequently is going to mean that you're constantly updating each portfolio document every time a stock is traded.
-Stock *zaza* may be traded many hundreds of times in a single day and thousands of users could have *zaza* on their portfolio. With a data model like the above we would have to update many thousands of portfolio documents many times every day leading to a system that won't scale well.
+Stock *zbzb* may be traded many hundreds of times in a single day and thousands of users could have *zbzb* on their portfolio. With a data model like the above we would have to update many thousands of portfolio documents many times every day leading to a system that won't scale well.
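To put a rough number on the write amplification (the counts below are invented for illustration): with embedded stock data, one trade of *zbzb* touches every portfolio holding it, while a referenced model touches a single stock document.

```python
# Embedded model: 1,000 portfolios each hold the zbzb stock data inline,
# so one price change means 1,000 document writes.
portfolios = [
    {"id": f"p{i}",
     "holdings": [{"numberHeld": 100,
                   "stock": {"symbol": "zbzb", "open": 1, "high": 2, "low": 0.5}}]}
    for i in range(1000)
]

def embedded_writes_per_trade(portfolios, symbol):
    # Count the portfolio documents that embed this stock and must be rewritten.
    return sum(1 for p in portfolios
               for h in p["holdings"] if h["stock"]["symbol"] == symbol)

# Referenced model: portfolios store only the symbol; one stock document
# holds the price, so one trade is one write.
stocks = {"zbzb": {"symbol": "zbzb", "open": 1, "high": 2, "low": 0.5}}

def referenced_writes_per_trade(stocks, symbol):
    return 1 if symbol in stocks else 0
```

The reference model trades a cheaper write path for an extra read when the price is needed, which suits data that changes far more often than it's displayed.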
## <a id="referencing-data"></a>Reference data
Person document:
Stock documents: { "id": "1",
- "symbol": "zaza",
+ "symbol": "zbzb",
"open": 1, "high": 2, "low": 0.5,
An immediate downside to this approach though is if your application is required
### What about foreign keys?
-Because there's currently no concept of a constraint, foreign-key or otherwise, any inter-document relationships that you have in documents are effectively "weak links" and won't be verified by the database itself. If you want to ensure that the data a document is referring to actually exists, then you need to do this in your application, or through the use of server-side triggers or stored procedures on Azure Cosmos DB.
+Because there's currently no concept of a constraint, foreign-key or otherwise, any inter-document relationships that you have in documents are effectively "weak links" and won't be verified by the database itself. If you want to ensure that the data a document is referring to actually exists, then you need to do this in your application, or by using server-side triggers or stored procedures on Azure Cosmos DB.
### When to reference
Book documents:
{"id": "1000","name": "Deep Dive into Azure Cosmos DB", "pub-id": "mspress"} ```
-In the above example, we have dropped the unbounded collection on the publisher document. Instead we just have a reference to the publisher on each book document.
+In the above example, we've dropped the unbounded collection on the publisher document. Instead we just have a reference to the publisher on each book document.
-### How do I model many to many relationships?
+### How do I model many-to-many relationships?
-In a relational database *many:many* relationships are often modeled with join tables, which just join records from other tables together.
+In a relational database *many-to-many* relationships are often modeled with join tables, which just join records from other tables together.
:::image type="content" source="./media/sql-api-modeling-data/join-table.png" alt-text="Join tables" border="false":::
Here we've (mostly) followed the embedded model, where data from other entities
If you look at the book document, we can see a few interesting fields when we look at the array of authors. There's an `id` field that is the field we use to refer back to an author document, standard practice in a normalized model, but then we also have `name` and `thumbnailUrl`. We could have stuck with `id` and left the application to get any additional information it needed from the respective author document using the "link", but because our application displays the author's name and a thumbnail picture with every book displayed we can save a round trip to the server per book in a list by denormalizing **some** data from the author.
-Sure, if the author's name changed or they wanted to update their photo we'd have to go and update every book they ever published but for our application, based on the assumption that authors don't change their names often, this is an acceptable design decision.
+Sure, if the author's name changed or they wanted to update their photo we'd have to update every book they ever published but for our application, based on the assumption that authors don't change their names often, this is an acceptable design decision.
In the example, there are **pre-calculated aggregate** values to save expensive processing on a read operation. Some of the data embedded in the author document is calculated at run-time. Every time a new book is published, a book document is created **and** the countOfBooks field is set to a calculated value based on the number of book documents that exist for a particular author. This optimization would be good in read-heavy systems where we can afford to do computations on writes in order to optimize reads.
-The ability to have a model with pre-calculated fields is made possible because Azure Cosmos DB supports **multi-document transactions**. Many NoSQL stores can't do transactions across documents and therefore advocate design decisions, such as "always embed everything", due to this limitation. With Azure Cosmos DB, you can use server-side triggers, or stored procedures, that insert books and update authors all within an ACID transaction. Now you don't **have** to embed everything into one document just to be sure that your data remains consistent.
+The ability to have a model with pre-calculated fields is made possible because Azure Cosmos DB supports **multi-document transactions**. Many NoSQL stores can't do transactions across documents and therefore advocate design decisions, such as "always embed everything", due to this limitation. With Azure Cosmos DB, you can use server-side triggers, or stored procedures that insert books and update authors all within an ACID transaction. Now you don't **have** to embed everything into one document just to be sure that your data remains consistent.
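As a rough sketch of the idea, with plain Python standing in for a server-side stored procedure (the `countOfBooks` field comes from the text; the other document shapes are assumptions):

```python
books = {}
authors = {"a1": {"id": "a1", "name": "Thomas Andersen", "countOfBooks": 0}}

def publish_book(book, author_id):
    # In Azure Cosmos DB, these two writes would run inside one server-side
    # stored procedure or trigger, so they commit (or fail) together and the
    # pre-calculated aggregate never drifts from the real count.
    books[book["id"]] = book
    authors[author_id]["countOfBooks"] = sum(
        1 for b in books.values() if b["authorId"] == author_id)

publish_book({"id": "b1", "name": "Deep Dive into Azure Cosmos DB",
              "authorId": "a1"}, "a1")
```

Readers then get the author's book count from the author document alone, with no aggregate query at read time.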
## Distinguish between different document types
Review documents:
This integration happens through [Azure Cosmos DB analytical store](../analytical-store-introduction.md), a columnar representation of your transactional data that enables large-scale analytics without any impact to your transactional workloads. This analytical store is suitable for fast, cost-effective queries on large operational data sets, without copying data and impacting the performance of your transactional workloads. When you create a container with analytical store enabled, or when you enable analytical store on an existing container, all transactional inserts, updates, and deletes are synchronized with analytical store in near real time; no Change Feed or ETL jobs are required.
-With Synapse Link, you can now directly connect to your Azure Cosmos DB containers from Azure Synapse Analytics and access the analytical store, at no Request Units (RUs) costs. Azure Synapse Analytics currently supports Synapse Link with Synapse Apache Spark and serverless SQL pools. If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions for that account.
+With Azure Synapse Link, you can now directly connect to your Azure Cosmos DB containers from Azure Synapse Analytics and access the analytical store, at no request unit cost. Azure Synapse Analytics currently supports Azure Synapse Link with Synapse Apache Spark and serverless SQL pools. If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions for that account.
### Analytical store automatic schema inference
Normalization becomes meaningless since with Azure Synapse Link you can join bet
* Fewer properties per document. * Data structures with fewer nested levels.
-Please note that these last two factors, fewer properties and fewer levels, help in the performance of your analytical queries but also decrease the chances of parts of your data not being represented in the analytical store. As described in the article on automatic schema inference rules, there are limits to the number of levels and properties that are represented in analytical store.
+Note that these last two factors, fewer properties and fewer levels, help in the performance of your analytical queries but also decrease the chances of parts of your data not being represented in the analytical store. As described in the article on automatic schema inference rules, there are limits to the number of levels and properties that are represented in analytical store.
Another important factor for normalization is that SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. In other words, both analytical store and Synapse SQL serverless pools have a limit of 1000 properties.
But what to do since denormalization is an important data modeling technique for
Your Azure Cosmos DB partition key (PK) isn't used in analytical store. And now you can use [analytical store custom partitioning](https://devblogs.microsoft.com/cosmosdb/custom-partitioning-azure-synapse-link/) to create copies of analytical store using any PK that you want. Because of this isolation, you can choose a PK for your transactional data with focus on data ingestion and point reads, while cross-partition queries can be done with Azure Synapse Link. Let's see an example:
-In a hypothetical global IoT scenario, `device id` is a good PK since all devices have a similar data volume and with that you won't have a hot partition problem. But if you want to analyze the data of more than one device, like "all data from yesterday" or "totals per city", you may have problems since those are cross-partition queries. Those queries can hurt your transactional performance since they use part of your throughput in RUs to run. But with Azure Synapse Link, you can run these analytical queries at no RUs costs. Analytical store columnar format is optimized for analytical queries and Azure Synapse Link leverages this characteristic to allow great performance with Azure Synapse Analytics runtimes.
+In a hypothetical global IoT scenario, `device id` is a good PK since all devices have a similar data volume, so you won't have a hot partition problem. But if you want to analyze the data of more than one device, like "all data from yesterday" or "totals per city", you may have problems since those are cross-partition queries. Those queries can hurt your transactional performance since they use part of your throughput in request units to run. But with Azure Synapse Link, you can run these analytical queries at no request unit cost. Analytical store columnar format is optimized for analytical queries, and Azure Synapse Link applies this characteristic to allow great performance with Azure Synapse Analytics runtimes.
### Data types and properties names
Azure Synapse Link allows you to reduce costs from the following perspectives:
* Fewer queries running in your transactional database.
* A PK optimized for data ingestion and point reads, reducing data footprint, hot partition scenarios, and partition splits.
* Data tiering, since [analytical time-to-live (attl)](../analytical-store-introduction.md#analytical-ttl) is independent from transactional time-to-live (tttl). You can keep your transactional data in transactional store for a few days, weeks, or months, and keep the data in analytical store for years or forever. Analytical store columnar format brings a natural data compression, from 50% up to 90%, and its cost per GB is ~10% of the transactional store actual price. For more information about the current backup limitations, see [analytical store overview](../analytical-store-introduction.md).
- * No ETL jobs running in your environment, meaning that you don't need to provision RUs for them.
+ * No ETL jobs running in your environment, meaning that you don't need to provision request units for them.
### Controlled redundancy
-This is a great alternative for situations when a data model already exists and can't be changed. And the existing data model doesn't fit well into analytical store due to automatic schema inference rules like the limit of nested levels or the maximum number of properties. If this is your case, you can leverage [Azure Cosmos DB Change Feed](../change-feed.md) to replicate your data into another container, applying the required transformations for a Synapse Link friendly data model. Let's see an example:
+This is a great alternative for situations when a data model already exists, can't be changed, and doesn't fit well into analytical store due to automatic schema inference rules such as the limit on nested levels or the maximum number of properties. If this is your case, you can use [Azure Cosmos DB Change Feed](../change-feed.md) to replicate your data into another container, applying the required transformations for an Azure Synapse Link friendly data model. Let's see an example:
#### Scenario
Container `CustomersOrdersAndItems` is used to store online orders including customer and item details: billing address, delivery address, delivery method, delivery status, item prices, etc. Only the first 1000 properties are represented, and key information isn't included in analytical store, blocking Azure Synapse Link usage. The container has PBs of records, and it's not possible to change the application and remodel the data.
-Another perspective of the problem is the big data volume. Billions of rows are constantly used by the Analytics Department, what prevents them to use tttl for old data deletion. Maintaining the entire data history in the transactional database because of analytical needs forces them to constantly increase RUs provisioning, impacting costs. Transactional and analytical workloads compete for the same resources at the same time.
+Another perspective of the problem is the large data volume. Billions of rows are constantly used by the Analytics Department, which prevents them from using tttl for old data deletion. Maintaining the entire data history in the transactional database because of analytical needs forces them to constantly increase request units provisioning, impacting costs. Transactional and analytical workloads compete for the same resources at the same time.
What to do?
#### Solution with Change Feed
-* The engineering team decided to use Change Feed to populate three new containers: `Customers`, `Orders`, and `Items`. With Change Feed they are normalizing and flattening the data. Unnecessary information is removed from the data model and each container has close to 100 properties, avoiding data loss due to automatic schema inference limits.
-* These new containers have analytical store enabled and now the Analytics Department is using Synapse Analytics to read the data, reducing the RUs usage since the analytical queries are happening in Synapse Apache Spark and serverless SQL pools.
-* Container `CustomersOrdersAndItems` now has tttl set to keep data for six months only, which allows for another RUs usage reduction, since there's a minimum of 10 RUs per GB in Azure Cosmos DB. Less data, fewer RUs.
+* The engineering team decided to use Change Feed to populate three new containers: `Customers`, `Orders`, and `Items`. With Change Feed they're normalizing and flattening the data. Unnecessary information is removed from the data model and each container has close to 100 properties, avoiding data loss due to automatic schema inference limits.
+* These new containers have analytical store enabled and now the Analytics Department is using Synapse Analytics to read the data, reducing the request units usage since the analytical queries are happening in Synapse Apache Spark and serverless SQL pools.
+* Container `CustomersOrdersAndItems` now has tttl set to keep data for six months only, which allows for another request units usage reduction, since there's a minimum of 10 request units per GB in Azure Cosmos DB. Less data, fewer request units.
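The reshaping those bullets describe can be sketched as a pure-Python transformation; the document shapes and property names below are assumptions, and a real pipeline would run this logic inside a Change Feed handler:

```python
def split_order_document(doc):
    """Flatten one CustomersOrdersAndItems document into three thin items."""
    customer = {"id": doc["customerId"], "name": doc["customerName"]}
    order = {"id": doc["orderId"], "customerId": doc["customerId"],
             "deliveryStatus": doc["deliveryStatus"]}
    # One item document per line item, keyed so it can live in its own container.
    items = [{"id": f'{doc["orderId"]}-{i}', "orderId": doc["orderId"], **item}
             for i, item in enumerate(doc["items"])]
    return customer, order, items

source_doc = {
    "customerId": "c1", "customerName": "Contoso",
    "orderId": "o1", "deliveryStatus": "shipped",
    "items": [{"sku": "sku-1", "price": 10}, {"sku": "sku-2", "price": 5}],
}
customer, order, items = split_order_document(source_doc)
```

Each output document stays well under the schema inference limits, so all of its properties can be represented in analytical store.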
## Takeaways The biggest takeaways from this article are to understand that data modeling in a schema-free world is as important as ever.
-Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can leverage the flexibility of a schema-free database to embrace that change and evolve your data model easily.
+Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can use the flexibility of a schema-free database to embrace that change and evolve your data model easily.
## Next steps
cosmos-db Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/odbc-driver.md
Title: Connect to Azure Cosmos DB using BI analytics tools description: Learn how to use the Azure Cosmos DB ODBC driver to create tables and views so that normalized data can be viewed in BI and data analytics software.--+++
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Core (SQL) API description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for Core (SQL) API-+ Last updated 01/20/2021-++ # Azure PowerShell samples for Azure Cosmos DB Core (SQL) API
cosmos-db Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-template.md
Title: Quickstart - Create an Azure Cosmos DB and a container by using Azure Resource Manager template description: Quickstart showing how to an Azure Cosmos database and a container by using Azure Resource Manager template--+++ tags: azure-resource-manager
cosmos-db Scale On Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/scale-on-schedule.md
Title: Scale Azure Cosmos DB on a schedule by using Azure Functions timer description: Learn how to scale changes in throughput in Azure Cosmos DB using PowerShell and Azure Functions.-+ Last updated 01/13/2020-++ # Scale Azure Cosmos DB throughput by using Azure Functions Timer trigger
cosmos-db Serverless Computing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/serverless-computing-database.md
-+ Last updated 05/02/2020
cosmos-db Sql Query Scalar Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-scalar-expressions.md
Title: Scalar expressions in Azure Cosmos DB SQL queries description: Learn about the scalar expression SQL syntax for Azure Cosmos DB. This article also describes how to combine scalar expressions into complex expressions by using operators. -+ Last updated 05/17/2019-++ # Scalar expressions in Azure Cosmos DB SQL queries
cosmos-db Synthetic Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/synthetic-partition-keys.md
Last updated 08/26/2021--+++
cosmos-db Templates Samples Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/templates-samples-sql.md
Title: Azure Resource Manager templates for Azure Cosmos DB Core (SQL API) description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB. -+ Last updated 08/26/2021-++ # Azure Resource Manager templates for Azure Cosmos DB
cosmos-db Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/time-to-live.md
Title: Expire data in Azure Cosmos DB with Time to Live description: With TTL, Microsoft Azure Cosmos DB provides the ability to have documents automatically purged from the system after a period of time.--+++ Last updated 09/16/2021-- # Time to Live (TTL) in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
cosmos-db Troubleshoot Bad Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-bad-request.md
Last updated 03/07/2022 -+ # Diagnose and troubleshoot bad request exceptions in Azure Cosmos DB
cosmos-db Troubleshoot Changefeed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-changefeed-functions.md
Last updated 04/14/2022 -+ # Diagnose and troubleshoot issues when using Azure Functions trigger for Cosmos DB
cosmos-db Troubleshoot Dot Net Sdk Request Header Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-header-too-large.md
Last updated 09/29/2021 -+
cosmos-db Troubleshoot Dot Net Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-timeout.md
Last updated 02/02/2022 -+
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
Last updated 03/09/2022 -+ # Diagnose and troubleshoot slow requests in Azure Cosmos DB .NET SDK
cosmos-db Troubleshoot Forbidden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-forbidden.md
Last updated 04/14/2022 -+ # Diagnose and troubleshoot Azure Cosmos DB forbidden exceptions
cosmos-db Troubleshoot Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-not-found.md
Last updated 05/26/2021 -+
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-rate-too-large.md
Last updated 03/03/2022 -+ # Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions
cosmos-db Troubleshoot Request Timeout Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-timeout-java-sdk-v4-sql.md
Last updated 10/28/2020 -+ # Diagnose and troubleshoot Azure Cosmos DB Java v4 SDK request timeout exceptions
cosmos-db Troubleshoot Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-timeout.md
Last updated 07/13/2020 -+ # Diagnose and troubleshoot Azure Cosmos DB request timeout exceptions
cosmos-db Troubleshoot Sdk Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-sdk-availability.md
Last updated 03/28/2022
-+ # Diagnose and troubleshoot the availability of Azure Cosmos SDKs in multiregional environments [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
cosmos-db Troubleshoot Service Unavailable Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-service-unavailable-java-sdk-v4-sql.md
Last updated 02/03/2022 -+ # Diagnose and troubleshoot Azure Cosmos DB Java v4 SDK service unavailable exceptions
cosmos-db Troubleshoot Service Unavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-service-unavailable.md
Last updated 08/06/2020 -+ # Diagnose and troubleshoot Azure Cosmos DB service unavailable exceptions
cosmos-db Troubleshoot Unauthorized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-unauthorized.md
Last updated 07/13/2020 -+ # Diagnose and troubleshoot Azure Cosmos DB unauthorized exceptions
cosmos-db Tutorial Global Distribution Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-global-distribution-sql-api.md
Title: 'Tutorial: Azure Cosmos DB global distribution tutorial for the SQL API' description: 'Tutorial: Learn how to set up Azure Cosmos DB global distribution using the SQL API with .NET, Java, Python and various other SDKs'--+++ Last updated 04/03/2022- - # Tutorial: Set up Azure Cosmos DB global distribution using the SQL API [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
cosmos-db Tutorial Query Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-query-sql-api.md
Title: 'Tutorial: How to query with SQL in Azure Cosmos DB?' description: 'Tutorial: Learn how to query with SQL queries in Azure Cosmos DB using the query playground'--+++ Last updated 08/26/2021- # Tutorial: Query Azure Cosmos DB by using the SQL API
cosmos-db Tutorial Sql Api Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-sql-api-dotnet-bulk-import.md
Last updated 03/25/2022-+ ms.devlang: csharp
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Last updated 07/12/2021-+
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB Table API description: Azure CLI Samples for Azure Cosmos DB Table API-+ Last updated 02/21/2022-++
cosmos-db Create Table Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-java.md
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
### [Azure CLI](#tab/azure-cli)
-Cosmos DB accounts are created using the [az Cosmos DB create](/cli/azure/cosmosdb#az_cosmosdb_create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB. As all Azure resource must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
+Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
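The naming rules above can be checked client-side before calling the CLI. A minimal sketch, assuming only the rules stated here (the helper name is ours, not part of any SDK):

```python
import re

# Cosmos DB account names: 3-44 characters; only lowercase letters,
# digits, and the hyphen (-) character, per the rules described above.
ACCOUNT_NAME_RE = re.compile(r"[a-z0-9-]{3,44}")

def is_valid_cosmos_account_name(name: str) -> bool:
    """Return True if the name satisfies the documented character and length rules.

    Note: this checks format only; uniqueness across Azure can only be
    verified by the service itself.
    """
    return bool(ACCOUNT_NAME_RE.fullmatch(name))
```

This catches format errors locally, but a valid-looking name can still be rejected by Azure if another account already uses it.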
In the [Azure portal](https://portal.azure.com/), complete the following steps t
### [Azure CLI](#tab/azure-cli)
-Tables in Cosmos DB are created using the [az Cosmos DB table create](/cli/azure/cosmosdb/table#az_cosmosdb_table_create) command.
+Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
```azurecli COSMOS_TABLE_NAME='WeatherData'
To access your table(s) in Cosmos DB, your app will need the table connection st
### [Azure CLI](#tab/azure-cli)
-To get the primary table storage connection string using Azure CLI, use the [az Cosmos DB keys list](/cli/azure/cosmosdb/keys#az_cosmosdb_keys_list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
+To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
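The selection that the JMESPath query performs can be sketched in plain Python. The payload shape below is a simplified assumption of what the `connection-strings` output looks like, not a verbatim response:

```python
from typing import Optional

# Assumed, simplified shape of the CLI output: a list of entries, each
# with a human-readable description and the connection string itself.
payload = {
    "connectionStrings": [
        {"description": "Primary SQL Connection String",
         "connectionString": "AccountEndpoint=...;AccountKey=..."},
        {"description": "Primary Table Connection String",
         "connectionString": "DefaultEndpointsProtocol=...;TableEndpoint=..."},
    ]
}

def primary_table_connection_string(data: dict) -> Optional[str]:
    """Pick the first connection string whose description mentions 'Table',
    mirroring what a JMESPath filter over the CLI output would select."""
    for entry in data["connectionStrings"]:
        if "Table" in entry["description"]:
            return entry["connectionString"]
    return None
```

In practice the filtering happens inside the CLI via `--query`, so your app only ever receives the single string it needs.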
```azurecli # This gets the primary Table connection string
A resource group can be deleted using the [Azure portal](https://portal.azure.co
### [Azure CLI](#tab/azure-cli)
-To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az_group_delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
```azurecli
az group delete --name $RESOURCE_GROUP_NAME
```
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-container.md
Title: Create a container in Azure Cosmos DB Table API description: Learn how to create a container in Azure Cosmos DB Table API by using Azure portal, .NET, Java, Python, Node.js, and other SDKs. -+ Last updated 10/16/2020-++ # Create a container in Azure Cosmos DB Table API
cosmos-db How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-python.md
ms.devlang: python
Last updated 03/23/2021 -+
cosmos-db How To Use Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-ruby.md
Last updated 07/23/2020 -+ # How to use Azure Table Storage and the Azure Cosmos DB Table API with Ruby [!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB Table API with Bicep description: Use Bicep to create and configure Azure Cosmos DB Table API. -+ Last updated 09/13/2021-++ # Manage Azure Cosmos DB Table API resources using Bicep
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Table API description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Table API-+ Last updated 01/20/2021-++ # Azure PowerShell samples for Azure Cosmos DB Table API
cosmos-db Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/resource-manager-templates.md
Title: Resource Manager templates for Azure Cosmos DB Table API description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Table API. -+ Last updated 05/19/2020-++ # Manage Azure Cosmos DB Table API resources using Azure Resource Manager templates
cosmos-db Table Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/table-import.md
Title: Migrate existing data to a Table API account in Azure Cosmos DB description: Learn how to migrate or import on-premises or cloud data to an Azure Table API account in Azure Cosmos DB.--+++
cosmos-db Table Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/table-support.md
Last updated 11/03/2021 -+ ms.devlang: cpp, csharp, java, javascript, php, python, ruby
cosmos-db Tutorial Global Distribution Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-global-distribution-table.md
Last updated 01/30/2020-+ # Set up Azure Cosmos DB global distribution using the Table API [!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
cosmos-db Tutorial Query Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-query-table.md
Last updated 06/05/2020-+ ms.devlang: csharp
cosmos-db Total Cost Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/total-cost-ownership.md
Title: Total Cost of Ownership (TCO) with Azure Cosmos DB description: This article compares the total cost of ownership of Azure Cosmos DB with IaaS and on-premises databases--+++ Last updated 08/26/2021- # Total Cost of Ownership (TCO) with Azure Cosmos DB
cosmos-db Tutorial Setup Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/tutorial-setup-ci-cd.md
Last updated 01/28/2020 -+ # Set up a CI/CD pipeline with the Azure Cosmos DB Emulator build task in Azure DevOps
cosmos-db Understand Your Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/understand-your-bill.md
Title: Understanding your Azure Cosmos DB bill description: This article explains how to understand your Azure Cosmos DB bill with some examples.--+++ Last updated 03/31/2022- # Understand your Azure Cosmos DB bill
cosmos-db Unique Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/unique-keys.md
Title: Use unique keys in Azure Cosmos DB description: Learn how to define and use unique keys for an Azure Cosmos database. This article also describes how unique keys add a layer of data integrity.--+++ Last updated 08/26/2021- # Unique key constraints in Azure Cosmos DB
cosmos-db Update Backup Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/update-backup-storage-redundancy.md
Last updated 12/03/2021 -+
cosmos-db Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-cases.md
Title: Common use cases and scenarios for Azure Cosmos DB description: 'Learn about the top five use cases for Azure Cosmos DB: user generated content, event logging, catalog data, user preferences data, and Internet of Things (IoT).' --+++ Last updated 05/21/2019
cosmos-db Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-metrics.md
Title: Monitor and debug with insights in Azure Cosmos DB
description: Use metrics in Azure Cosmos DB to debug common issues and monitor the database. -+
cosmos-db Visualize Qlik Sense https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/visualize-qlik-sense.md
Last updated 05/23/2019-+ # Connect Qlik Sense to Azure Cosmos DB and visualize your data
cosmos-db Whitepapers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/whitepapers.md
Title: Whitepapers that describe Azure Cosmos DB concepts
description: Get the list of whitepapers for Azure Cosmos DB; these whitepapers describe the concepts in depth. --+++ Last updated 05/07/2021
cost-management-billing Understand Azure Data Explorer Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-azure-data-explorer-reservation-charges.md
Title: Understand how the reservation discount is applied to Azure Data Explorer
+ Title: Reservation discount for Azure Data Explorer
description: Learn how the reservation discount is applied to the Azure Data Explorer markup meter. Previously updated : 09/15/2021 Last updated : 05/31/2022+
-# Understand how the reservation discount is applied to Azure Data Explorer
+# How the reservation discount is applied to Azure Data Explorer
After you buy Azure Data Explorer reserved capacity, the reservation discount is automatically applied to Azure Data Explorer resources that match the attributes and quantity of the reservation. A reservation includes the Azure Data Explorer markup charges. It doesn't include compute, networking, storage, or any other Azure resource used to operate the Azure Data Explorer cluster. Reservations for these resources should be bought separately.
-## How reservation discount is applied
+## Reservation discount usage
-A reservation discount is on a "*use-it-or-lose-it*" basis. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
+A reservation discount is on a "*use-it-or-lose-it*" basis. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward discounts for unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are *lost*.
-## Reservation discount applied to Azure Data Explorer clusters
+## Discount for other resources
A reservation discount is applied to Azure Data Explorer markup consumption on an hour-by-hour basis. For Azure Data Explorer resources that don't run the full hour, the reservation discount is automatically applied to other Data Explorer resources that match the reservation attributes. The discount can apply to Azure Data Explorer resources that are running concurrently. If you don't have Azure Data Explorer resources that run for the full hour and that match the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
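The hour-by-hour, "use-it-or-lose-it" behavior described above can be illustrated with a small arithmetic sketch; the unit quantities are hypothetical, for illustration only:

```python
def discounted_units_per_hour(reserved_units, hourly_usage):
    """For each hour, return (units covered by the reservation, units billed
    at pay-as-you-go rates). Unused reserved units are lost for that hour;
    they are never carried forward to later hours."""
    results = []
    for used in hourly_usage:
        covered = min(reserved_units, used)   # discount applies up to the reserved quantity
        overage = used - covered              # anything above that is billed normally
        results.append((covered, overage))
    return results

# Example: 10 reserved markup units, three consecutive hours of usage.
# Hour 1 wastes 4 reserved units; hour 3's 4-unit overage is billed in full.
usage = [6.0, 10.0, 14.0]
print(discounted_units_per_hour(10.0, usage))
# -> [(6.0, 0.0), (10.0, 0.0), (10.0, 4.0)]
```

The sketch makes the key point concrete: under-use in one hour never offsets over-use in another, which is why keeping matching resources running for full hours maximizes the benefit.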
If you have questions or need help, [create a support request](https://go.micros
To learn more about Azure reservations, see the following articles: * [Prepay for Azure Data Explorer compute resources with Azure Data Explorer reserved capacity](/azure/data-explorer/pricing-reserved-capacity)
-* [What are reservations for Azure](save-compute-costs-reservations.md)
+* [What are reservations for Azure?](save-compute-costs-reservations.md)
* [Manage Azure reservations](manage-reserved-vm-instance.md)
-* [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
+* [Understand reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md)
* [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md) * [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations)
databox-online Azure Stack Edge Gpu 2205 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2205-release-notes.md
+
+ Title: Azure Stack Edge 2205 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2205 release.
++
+
+++ Last updated : 06/06/2022+++
+# Azure Stack Edge 2205 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2205 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2205** release, which maps to software version number **2.2.1981.5086**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
+
+## What's new
+
+The 2205 release has the following features and enhancements:
+
+- **Kubernetes changes** - Beginning with this release, compute enablement is moved to a dedicated Kubernetes page in the local UI.
+- **Generation 2 virtual machines** - Starting with this release, Generation 2 virtual machines can be deployed on Azure Stack Edge. For more information, see [Supported VM sizes and types](azure-stack-edge-gpu-virtual-machine-overview.md#operating-system-disks-and-images).
+- **GPU extension update** - In this release, the GPU extension packages are updated. These updates will fix some issues that were encountered in a previous release during the installation of the extension. For more information, see how to [Update GPU extension of your Azure Stack Edge](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md).
+- **No IP option** - Going forward, there's an option to not set an IP for a network interface on your Azure Stack Edge device. For more information, see [Configure network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-network).
++
+## Issues fixed in 2205 release
+
+The following table lists the issues that were documented in previous release notes and fixed in the current release.
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|GPU Extension installation | In previous releases, there were issues that caused the GPU extension installation to fail. These issues are described in [Troubleshooting GPU extension issues](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md). These issues are fixed in the 2205 release, and both the Windows and Linux installation packages are updated. More information on 2205-specific installation changes is covered in [Install GPU extension on your Azure Stack Edge device](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md). |
+
+## Known issues in 2205 release
+
+The following table provides a summary of known issues in this release.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Preview features |For this release, the following features are available in preview: <br> - Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. <br> - VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only. <br> - Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R. |These features will be generally available in later releases. |
+|**2.**|HPN VMs |For this release, the Standard_F12_HPN VM size supports only one network interface and can't be used for Multi-Access Edge Computing (MEC) deployments. | |
++
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
| **1.** |Azure Stack Edge Pro + Azure SQL | Creating a SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> - In the local UI of your device, enable the compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> - Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> - Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> - The final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: can't create directory 'test': Permission denied|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
+|**9.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.</li></ul> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. Support for ":" is also required for the Event Grid IoT Edge module and other applications to function on the Azure Stack Edge device. For more information, see [ASP.NET Core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201).|
|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the Git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, you can create one VM that uses a GPU and then configure Kubernetes. In this case, Kubernetes will use the one remaining GPU. |
+|**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> - Connect to the Windows VM using remote desktop protocol (RDP). <br> - Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> - If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> - While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> - After you kill the process, the process starts running again with the newer version. <br> - Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> - [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
|**24.**|GPU VMs |Prior to this release, the GPU VM lifecycle wasn't managed in the update flow. Hence, when updating to the 2103 release, GPU VMs aren't stopped automatically during the update. You'll need to manually stop the GPU VMs using the `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully, and the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via `stop-stayProvisioned` before the update are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VMs after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were originally used by the VMs. <br>The longer the GPU VMs are in a stopped state, the higher the chances that Kubernetes will take over the GPUs. |
+|**25.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**26.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | This functionality may be available in a future release. |
++
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Create Virtual Machine Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-image.md
Previously updated : 07/16/2021 Last updated : 05/19/2022 #Customer intent: As an IT admin, I need to understand how to create Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
Do the following steps to create a Windows VM image:
1. Create a Windows virtual machine in Azure. For portal instructions, see [Create a Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md). For PowerShell instructions, see [Tutorial: Create and manage Windows VMs with Azure PowerShell](../virtual-machines/windows/tutorial-manage-vm.md).
- The virtual machine must be a Generation 1 VM. The OS disk that you use to create your VM image must be a fixed-size VHD of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+ The virtual machine can be a Generation 1 or Generation 2 VM. The OS disk that you use to create your VM image must be a fixed-size VHD of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
- You can use any Windows Gen1 VM with a fixed-size VHD in Azure Marketplace. For a list Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images).
+ You can use any Windows Gen1 or Gen2 VM with a fixed-size VHD in Azure Marketplace. For a list of Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images).
2. Generalize the virtual machine. To generalize the VM, [connect to the virtual machine](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-windows-vm), open a command prompt, and run the following `sysprep` command:
Do the following steps to create a Linux VM image:
1. Create a Linux virtual machine in Azure. For portal instructions, see [Quickstart: Create a Linux VM in the Azure portal](../virtual-machines/linux/quick-create-portal.md). For PowerShell instructions, see [Quickstart: Create a Linux VM in Azure with PowerShell](../virtual-machines/linux/quick-create-powershell.md).
- You can use any Gen1 VM with a fixed-size VHD in Azure Marketplace to create Linux custom images, with the exception of Red Hat Enterprise Linux (RHEL) images, which require extra steps. For a list of Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images). For guidance on RHEL images, see [Using RHEL BYOS images](#using-rhel-byos-images), below.
+ You can use any Gen1 or Gen2 VM with a fixed-size VHD in Azure Marketplace to create Linux custom images. This excludes Red Hat Enterprise Linux (RHEL) images, which require extra steps and can only be used to create a Gen1 VM image. For a list of Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images). For guidance on RHEL images, see [Using RHEL BYOS images](#using-rhel-byos-images), below.
1. Deprovision the VM. Use the Azure VM agent to delete machine-specific files and data. Use the `waagent` command with the `-deprovision+user` parameter on your source Linux VM. For more information, see [Understanding and using Azure Linux Agent](../virtual-machines/extensions/agent-linux.md).
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
Previously updated : 06/14/2021 Last updated : 05/24/2022 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
PS /home/user> az vm image list --all --publisher "Canonical" --offer "UbuntuSer
PS /home/user> ```
->[!IMPORTANT]
-> Use only the Gen 1 images. Any images specified as Gen 2 (usually the sku has a "-g2" suffix), do not work on Azure Stack Edge.
- In this example, we will select Windows Server 2019 Datacenter Core, version 2019.0.20190410. We will identify this image by its Universal Resource Number ("URN"). :::image type="content" source="media/azure-stack-edge-create-virtual-machine-marketplace-image/marketplace-image-1.png" alt-text="List of marketplace images":::
databox-online Azure Stack Edge Gpu Create Virtual Switch Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-switch-powershell.md
Before you begin, make sure that:
The client machine should be running a [Supported OS](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device). -- Use the local UI to enable compute on one of the physical network interfaces on your device as per the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips) on your device.
+- Use the local UI to enable compute on one of the physical network interfaces on your device as per the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches) on your device.
## Connect to the PowerShell interface
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
In this tutorial, you learn how to:
Before you set up a compute role on your Azure Stack Edge Pro device: - Make sure that you've activated your Azure Stack Edge Pro device as described in [Activate Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).-- Make sure that you've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips) and:
+- Make sure that you've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches) and:
- Enabled a network interface for compute. - Assigned Kubernetes node IPs and Kubernetes external service IPs.
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 04/06/2022 Last updated : 05/24/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to configure the network for your device.
![Screenshot of local web UI "Network" tile for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/network-1.png)
- On your physical device, there are six network interfaces. PORT 1 and PORT 2 are 1-Gbps network interfaces. PORT 3, PORT 4, PORT 5, and PORT 6 are all 25-Gbps network interfaces that can also serve as 10-Gbps network interfaces. PORT 1 is automatically configured as a management-only port, and PORT 2 to PORT 6 are all data ports. For a new device, the **Network settings** page is as shown below.
+ On your physical device, there are six network interfaces. PORT 1 and PORT 2 are 1-Gbps network interfaces. PORT 3, PORT 4, PORT 5, and PORT 6 are all 25-Gbps network interfaces that can also serve as 10-Gbps network interfaces. PORT 1 is automatically configured as a management-only port, and PORT 2 to PORT 6 are all data ports. For a new device, the **Network** page is as shown below.
![Screenshot of local web UI "Network" page for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/network-2a.png)
Follow these steps to configure the network for your device.
![Screenshot of local web UI "Port 3 Network settings" for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/network-4.png)
+ - By default, you're expected to set an IP for each of the ports. If you decide not to set an IP for a network interface on your device, you can set the IP to **No** and then **Modify** the settings.
+
+ ![Screenshot of local web UI "Port 2 Network settings" for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/set-ip-no.png)
+ As you configure the network settings, keep in mind: * Make sure that Port 5 and Port 6 are connected for Network Function Manager deployments. For more information, see [Tutorial: Deploy network functions on Azure Stack Edge (Preview)](../network-function-manager/deploy-functions.md).
Follow these steps to configure the network for your device.
* If DHCP isn't enabled, you can assign static IPs if needed. * You can configure your network interface as IPv4. * Serial number for any port corresponds to the node serial number. <!--* On 25-Gbps interfaces, you can set the RDMA (Remote Direct Access Memory) mode to iWarp or RoCE (RDMA over Converged Ethernet). Where low latencies are the primary requirement and scalability is not a concern, use RoCE. When latency is a key requirement, but ease-of-use and scalability are also high priorities, iWARP is the best candidate.-->
- <!--* Network Interface Card (NIC) Teaming or link aggregation is not supported with Azure Stack Edge. <!--NIC teaming should work for 2-node -->
+
> [!NOTE] > If you need to connect to your device from an outside network, see [Enable device access from outside network](azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#enable-device-access-from-outside-network) for additional network settings.
Follow these steps to configure the network for your device.
After you have configured and applied the network settings, select **Next: Advanced networking** to configure compute network.
-## Configure virtual switches and compute IPs
+## Configure virtual switches
-Follow these steps to enable compute on a virtual switch and configure virtual networks.
+Follow these steps to add or delete virtual switches and virtual networks.
1. In the local UI, go to the **Advanced networking** page.
-1. In the **Virtual switch** section, you'll assign compute intent to a virtual switch. Select **Add virtual switch** to create a new switch.
+1. In the **Virtual switch** section, you'll add or delete virtual switches. Select **Add virtual switch** to create a new switch.
![Screenshot of "Advanced networking" page in local UI for one node with Add virtual switch selected.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-1.png)
Follow these steps to enable compute on a virtual switch and configure virtual n
1. Provide a name for your virtual switch. 1. Choose the network interface on which the virtual switch should be created. 1. If deploying 5G workloads, set **Supports accelerated networking** to **Yes**.
- 1. Select the intent to associate with this network interface as **compute**. Alternatively, the switch can be used for management traffic as well. You can't configure storage intent as storage traffic was already configured based on the network topology that you selected earlier.
-
- > [!TIP]
- > Use *CTRL + Click* to select more than one intent for your virtual switch.
-
-1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
-
- For an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses (or more) are provided for the compute VM using the start and end IP addresses. For a 1-node device, provide a minimum of 2 contiguous IPv4 addresses.
-
- > [!IMPORTANT]
- > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
- > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
- > - If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
-
-1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
-
- > [!IMPORTANT]
- > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
-
-1. Select **Apply**.
-
- ![Screenshot of "Advanced networking" page in local UI with fully configured Add virtual switch blade for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
-
-1. The configuration takes a couple minutes to apply and you may need to refresh the browser. You can see that the specified virtual switch is created and enabled for compute.
+ 1. Select **Apply**. You can see that the specified virtual switch is created.
![Screenshot of "Advanced networking" page with virtual switch added and enabled for compute in local UI for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-3.png)
+1. You can create more than one switch by following the steps described earlier.
+1. To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
-To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
+You can now create virtual networks and associate them with the virtual switches you created.
-> [!IMPORTANT]
-> Only one virtual switch can be assigned for compute.
-### Configure virtual network
+## Configure virtual networks
You can add or delete virtual networks associated with your virtual switches. To add a virtual network, follow these steps:
You can add or delete virtual networks associated with your virtual switches. To
1. Provide a **Name** for your virtual network. 1. Enter a **VLAN ID** as a unique number in the 1-4094 range. The VLAN ID that you provide should be in your trunk configuration. For more information on trunk configuration for your switch, refer to the instructions from your physical switch manufacturer. 1. Specify the **Subnet mask** and **Gateway** for your virtual LAN network as per the physical network configuration.
- 1. Select **Apply**.
+ 1. Select **Apply**. A virtual network is created on the specified virtual switch.
![Screenshot of how to add virtual network in "Advanced networking" page in local UI for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-virtual-network-one-node-1.png)
-To delete a virtual network, under the **Virtual network** section, select **Delete virtual network**.
+1. To delete a virtual network, under the **Virtual network** section, select **Delete virtual network**, and then select the virtual network you want to delete.
+
+1. Select **Next: Kubernetes >** to configure compute IPs for Kubernetes.
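The VLAN ID entered in the steps above must be a unique number in the 1-4094 range, and it should also appear in your physical switch's trunk configuration. As an illustrative sketch only (this helper is hypothetical, not part of the device or Azure tooling), the check can be expressed as:

```python
def validate_vlan_id(vlan_id, trunk_allowed=None):
    """Check a VLAN ID against the 1-4094 range and an optional trunk allow-list."""
    if not 1 <= vlan_id <= 4094:
        return False
    # If a trunk allow-list is known, the VLAN must be carried by the trunk.
    return trunk_allowed is None or vlan_id in trunk_allowed

print(validate_vlan_id(100, trunk_allowed={100, 200}))  # True
print(validate_vlan_id(5000))                           # False: outside 1-4094
```

The actual trunk configuration lives on your physical switch; consult your switch manufacturer's instructions for the authoritative allow-list.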
++
+## Configure compute IPs
+
+Follow these steps to configure compute IPs for your Kubernetes workloads.
+
+1. In the local UI, go to the **Kubernetes** page.
+
+1. From the dropdown list, select the virtual switch that you'll use for Kubernetes compute traffic. <!--By default, all switches are configured for management. You can't configure storage intent as storage traffic was already configured based on the network topology that you selected earlier.-->
+
+1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
+
+ For an *n*-node device, provide a contiguous range of at least *n+1* IPv4 addresses for the compute VMs using the start and end IP addresses. For a 1-node device, provide a minimum of 2 contiguous IPv4 addresses.
+
+ > [!IMPORTANT]
+ > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
+ > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
+ > - If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
+
+1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster; specify the static IP range depending on the number of services exposed.
+
+ > [!IMPORTANT]
+ > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
+
+1. Select **Apply**.
+
+ ![Screenshot of "Advanced networking" page in local UI with fully configured Add virtual switch blade for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
-Select **Next: Web proxy** to configure web proxy.
+1. The configuration takes a couple of minutes to apply, and you may need to refresh the browser.
+
+1. Select **Next: Web proxy** to configure web proxy.
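The IP-planning rules in the steps above can be sanity-checked before you enter them in the local UI. The following sketch (illustrative only, not part of the device tooling; the IP values are made up) verifies that a planned node IP range holds at least *n+1* contiguous addresses and doesn't collide with the reserved Kubernetes pod and service subnets:

```python
import ipaddress

# Subnets reserved by Kubernetes on Azure Stack Edge (per the note above).
RESERVED = [ipaddress.ip_network("172.27.0.0/16"),   # pod subnet
            ipaddress.ip_network("172.28.0.0/16")]   # service subnet

def check_node_ip_range(start, end, node_count):
    """Validate a contiguous Kubernetes node IP range for an n-node device."""
    first, last = ipaddress.ip_address(start), ipaddress.ip_address(end)
    size = int(last) - int(first) + 1
    if size < node_count + 1:
        return f"Need at least {node_count + 1} contiguous IPs, got {size}"
    for i in range(size):
        ip = first + i
        if any(ip in net for net in RESERVED):
            return f"{ip} falls in a reserved Kubernetes subnet"
    return "OK"

print(check_node_ip_range("10.126.68.20", "10.126.68.21", 1))  # 1-node: 2 IPs -> OK
print(check_node_ip_range("172.27.0.5", "172.27.0.8", 2))      # collides with pod subnet
```

If the reserved subnets are already in use in your network, change them with `Set-HcsKubeClusterNetworkInfo` as described in the note above, and adjust `RESERVED` accordingly.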
::: zone-end
To configure the network for a 2-node device, follow these steps on the first no
![Local web UI "Advanced networking" page for a new device 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-settings-1m.png)
+ By default, you're expected to set an IP for each of the ports. If you decide not to set an IP for a network interface on your device, you can set the IP to **No** and then **Modify** the settings.
+
+ ![Screenshot of local web UI "Port 2 Network settings" for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/set-ip-no.png)
+ As you configure the network settings, keep in mind: * Make sure that Port 5 and Port 6 are connected for Network Function Manager deployments. For more information, see [Tutorial: Deploy network functions on Azure Stack Edge (Preview)](../network-function-manager/deploy-functions.md).
For clients connecting via NFS protocol to the two-node device, follow these ste
> [!NOTE] > Virtual IP settings are required. If you do not configure this IP, you will be blocked when configuring the **Device settings** in the next step.
-### Configure virtual switches and compute IPs
+### Configure virtual switches
-After the cluster is formed and configured, you'll now create new virtual switches or assign intent to the existing virtual switches that are created based on the selected network topology.
+After the cluster is formed and configured, you can now create new virtual switches.
> [!IMPORTANT] > On a two-node cluster, compute should only be configured on a virtual switch. 1. In the local UI, go to **Advanced networking** page.
-1. In the **Virtual switch** section, you'll assign compute intent to a virtual switch. You can select an existing virtual switch or select **Add virtual switch** to create a new switch.
+1. In the **Virtual switch** section, add or delete virtual switches. Select **Add virtual switch** to create a new switch.
![Configure compute page in Advanced networking in local UI 1](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-1.png) 1. In the **Network settings** blade, if using a new switch, provide the following:
- 1. Provide a name for your virtual switch.
+ 1. Provide a name for your virtual switch.
1. Choose the network interface on which the virtual switch should be created. 1. If deploying 5G workloads, set **Supports accelerated networking** to **Yes**.
- 1. Select the intent to associate with this network interface as **compute**. Alternatively, the switch can be used for management traffic as well. You can't configure storage intent as storage traffic was already configured based on the network topology that you selected earlier.
-
- > [!TIP]
- > Use *CTRL + Click* to select more than one intent for your virtual switch.
-
-1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
-
- For an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses (or more) are provided for the compute VM using the start and end IP addresses. For a 1-node device, provide a minimum of 2 contiguous IPv4 addresses. For a two-node cluster, provide a minimum of 3 contiguous IPv4 addresses.
-
- > [!IMPORTANT]
- > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
- > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
-
-1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
-
- > [!IMPORTANT]
- > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
-
-1. Select **Apply**.
+ 1. Select **Apply**.
- ![Configure compute page in Advanced networking in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
+1. The configuration takes a couple of minutes to apply. Once the virtual switch is created, the list of virtual switches updates to reflect the newly created switch.
-1. The configuration takes a couple minutes to apply and you may need to refresh the browser. You can see that the specified virtual switch is created and enabled for compute.
-
![Configure compute page in Advanced networking in local UI 3](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-3.png)
+1. You can create more than one switch by following the steps described earlier.
-To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
+1. To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
-> [!IMPORTANT]
-> Only one virtual switch can be assigned for compute.
+Next, create virtual networks and associate them with your virtual switches.
### Configure virtual network
-You can add or delete virtual networks associated with your virtual switches. To add a virtual switch, follow these steps:
+You can add or delete virtual networks associated with your virtual switches. To add a virtual network, follow these steps:
1. In the local UI on the **Advanced networking** page, under the **Virtual network** section, select **Add virtual network**. 1. In the **Add virtual network** blade, input the following information:
You can add or delete virtual networks associated with your virtual switches. To
1. Specify the **Subnet mask** and **Gateway** for your virtual LAN network as per the physical network configuration. 1. Select **Apply**.
+ ![Screenshot of how to add virtual network in "Advanced networking" page in local UI for two node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-virtual-network-one-node-1.png)
+
+1. To delete a virtual network, under the **Virtual network** section, select **Delete virtual network**, and then select the virtual network you want to delete.
+
+Select **Next: Kubernetes >** to configure compute IPs for Kubernetes.
+++
+## Configure compute IPs
+
+After the virtual switches are created, you can enable these switches for Kubernetes compute traffic.
+
+1. In the local UI, go to the **Kubernetes** page.
+1. From the dropdown list, select the virtual switch you want to enable for Kubernetes compute traffic.
+1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
+
+ For an *n*-node device, provide a contiguous range of at least *n+1* IPv4 addresses for the compute VMs using the start and end IP addresses. For a 1-node device, provide a minimum of 2 contiguous IPv4 addresses. For a two-node cluster, provide a minimum of 3 contiguous IPv4 addresses.
+
+ > [!IMPORTANT]
+ > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
+ > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
+
+1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster; specify the static IP range depending on the number of services exposed.
+
+ > [!IMPORTANT]
+ > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
+
+1. Select **Apply**.
+
+ ![Configure compute page in Advanced networking in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
+
+1. The configuration takes a couple of minutes to apply, and you may need to refresh the browser.
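As noted in the external service IP step above, the range needs one address for the Azure Stack Edge Hub service plus one per additional exposed service or IoT Edge module. A small illustrative helper (hypothetical, not device tooling; the start address is made up) for deriving the end of the contiguous range:

```python
import ipaddress

def external_service_range(start_ip, exposed_services):
    """Return (start, end) of a contiguous external-service IP range.

    Reserves 1 IP for the Edge Hub service plus 1 per additional
    exposed service/IoT Edge module, per the guidance above.
    """
    count = 1 + exposed_services  # Edge Hub + one per exposed service
    start = ipaddress.ip_address(start_ip)
    return str(start), str(start + count - 1)

# 1 Edge Hub IP + 3 exposed modules -> 4 contiguous addresses
print(external_service_range("10.126.68.31", 3))
```

The service IP addresses can be updated later, so start with the minimum and widen the range as you expose more services.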
-To delete a virtual network, under the **Virtual network** section, select **Delete virtual network**.
::: zone-end
This is an optional configuration. Although web proxy configuration is optional,
2. To validate and apply the configured web proxy settings, select **Apply**.
- ![Local web UI "Web proxy settings" page 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-web-proxy-1.png)<!--UI text update for instruction text is needed.-->
+ ![Local web UI "Web proxy settings" page 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-web-proxy-1.png)
1. After the settings are applied, select **Next: Device**.
databox-online Azure Stack Edge Gpu Deploy Gpu Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine.md
Previously updated : 08/03/2021 Last updated : 05/26/2022 #Customer intent: As an IT admin, I want the flexibility to deploy a single GPU virtual machine (VM) quickly in the portal or use templates to deploy and manage multiple GPU VMs efficiently on my Azure Stack Edge Pro GPU device.
Use the Azure portal to quickly deploy a single GPU VM. You can install the GPU
You can deploy a GPU VM via the portal or using Azure Resource Manager templates.
-For a list of supported operating systems, drivers, and VM sizes for GPU VMs, see [What are GPU virtual machines?](azure-stack-edge-gpu-overview-gpu-virtual-machines.md). For deployment considerations, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
+For a list of supported operating systems, drivers, and VM sizes for GPU VMs, see [What are GPU virtual machines?](azure-stack-edge-gpu-overview-gpu-virtual-machines.md) For deployment considerations, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
> [!IMPORTANT]
-> If your device will be running Kubernetes, do not configure Kubernetes before you deploy your GPU VMs. If you configure Kubernetes first, it claims all the available GPU resources, and GPU VM creation will fail. For Kubernetes deployment considerations on 1-GPU and 2-GPU devices, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
+> - Gen2 VMs are not supported for GPU.
+> - If your device will be running Kubernetes, do not configure Kubernetes before you deploy your GPU VMs. If you configure Kubernetes first, it claims all the available GPU resources, and GPU VM creation will fail. For Kubernetes deployment considerations on 1-GPU and 2-GPU devices, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
+> - If you're running a Windows 2016 VHD, you must enable TLS 1.2 inside the VM before you install the GPU extension on 2205 and higher. For detailed steps, see [Troubleshoot GPU extension issues for GPU VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md#failure-to-install-gpu-extension-on-a-windows-2016-vhd).
### [Portal](#tab/portal)
databox-online Azure Stack Edge Gpu Deploy Virtual Machine High Performance Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-high-performance-network.md
Previously updated : 09/29/2021 Last updated : 05/19/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
In addition to the above prerequisites that are used for VM creation, you'll als
Follow these steps to create an HPN VM on your device.
-1. In the Azure portal of your Azure Stack Edge resource, [Add a VM image](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#add-a-vm-image). You'll use this VM image to create a VM in the next step.
+1. In the Azure portal of your Azure Stack Edge resource, [Add a VM image](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#add-a-vm-image). You'll use this VM image to create a VM in the next step. You can choose either Gen1 or Gen2 for the VM.
1. Follow all the steps in [Add a VM](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#add-a-vm) with this configuration requirement.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md
Previously updated : 08/02/2021 Last updated : 05/26/2022 #Customer intent: As an IT admin, I need to understand how to install the GPU extension on GPU virtual machines (VMs) on my Azure Stack Edge Pro GPU device.
This article describes how to use the GPU driver extension to install appropriate Nvidia drivers on the GPU VMs running on your Azure Stack Edge device. The article covers steps for installing the GPU extension using Azure Resource Manager templates on both Windows and Linux VMs. > [!NOTE]
-> In the Azure portal, you can install a GPU extension during VM creation or after the VM is deployed. For steps and requirements, see [Deploy GPU virtual machines](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
-
+> - In the Azure portal, you can install a GPU extension during VM creation or after the VM is deployed. For steps and requirements, see [Deploy GPU virtual machines](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
+> - If you're running a Windows 2016 VHD, you must enable TLS 1.2 inside the VM before you install the GPU extension on 2205 and higher. For detailed steps, see [Troubleshoot GPU extension issues for GPU VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md#failure-to-install-gpu-extension-on-a-windows-2016-vhd).
## Prerequisites
Before you install GPU extension on the GPU VMs running on your device, make sur
- Make sure that the port enabled for compute network on your device is connected to Internet and has access. The GPU drivers are downloaded through the internet access.
- Here is an example where Port 2 was connected to the internet and was used to enable the compute network. If Kubernetes is not deployed on your environment, you can skip the Kubernetes node IP and external service IP assignment.
+ Here's an example where Port 2 was connected to the internet and was used to enable the compute network. If Kubernetes isn't deployed on your environment, you can skip the Kubernetes node IP and external service IP assignment.
![Screenshot of the Compute pane for an Azure Stack Edge device. Compute settings for Port 2 are highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension/enable-compute-network-1.png) 1. [Download the GPU extension templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory you'll use as a working directory.
-1. Verify that the client you'll use to access your device is still connected to the Azure Resource Manager over Azure PowerShell. The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute, will return error messages to the effect that you are not connected to Azure anymore. You will need to sign in again. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
+1. Verify that the client you'll use to access your device is still connected to the Azure Resource Manager over Azure PowerShell. The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute will return error messages to the effect that you aren't connected to Azure anymore. You'll need to sign in again. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
## Edit parameters file Depending on the operating system for your VM, you could install GPU extension for Windows or for Linux. - ### [Windows](#tab/windows) To deploy Nvidia GPU drivers for an existing VM, edit the `addGPUExtWindowsVM.parameters.json` parameters file and then deploy the template `addGPUextensiontoVM.json`.
+#### Version 2205 and higher
+
+The file `addGPUExtWindowsVM.parameters.json` takes the following parameters:
+
+```json
+"parameters": {
+ "vmName": {
+ "value": "<name of the VM>"
+ },
+ "extensionName": {
+ "value": "<name for the extension. Example: windowsGpu>"
+ },
+ "publisher": {
+ "value": "Microsoft.HpcCompute"
+ },
+ "type": {
+ "value": "NvidiaGpuDriverWindows"
+ },
+ "typeHandlerVersion": {
+ "value": "1.5"
+ },
+ "settings": {
+ "value": {
+ "DriverURL" : "http://us.download.nvidia.com/tesla/511.65/511.65-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe",
+ "DriverCertificateUrl" : "https://go.microsoft.com/fwlink/?linkid=871664",
+ "DriverType":"CUDA"
+ }
+ }
+ }
+```
+
+#### Versions lower than 2205
+
The file `addGPUExtWindowsVM.parameters.json` takes the following parameters: ```json
The file `addGPUExtWindowsVM.parameters.json` takes the following parameters:
### [Linux](#tab/linux)
-To deploy Nvidia GPU drivers for an existing Linux VM, edit the parameters file and then deploy the template `addGPUextensiontoVM.json`.
+To deploy Nvidia GPU drivers for an existing Linux VM, edit the `addGPUExtLinuxVM.parameters.json` parameters file and then deploy the template `addGPUextensiontoVM.json`.
+
+#### Version 2205 and higher
+
+If using Ubuntu or Red Hat Enterprise Linux (RHEL), the `addGPUExtLinuxVM.parameters.json` file takes the following parameters:
+
+```json
+"parameters": {
+ "vmName": {
+ "value": "<name of the VM>"
+ },
+ "extensionName": {
+ "value": "<name for the extension. Example: linuxGpu>"
+ },
+ "publisher": {
+ "value": "Microsoft.HpcCompute"
+ },
+ "type": {
+ "value": "NvidiaGpuDriverLinux"
+ },
+ "typeHandlerVersion": {
+ "value": "1.8"
+ },
+ "settings": {
+ }
+ }
+```
+
+#### Versions lower than 2205
If using Ubuntu or Red Hat Enterprise Linux (RHEL), the `addGPUExtLinuxVM.parameters.json` file takes the following parameters:
If using Ubuntu or Red Hat Enterprise Linux (RHEL), the `addGPUExtLinuxVM.parame
} ```
-Here is a sample Ubuntu parameter file that was used in this article:
+Here's a sample Ubuntu parameter file that was used in this article:
```powershell {
Here is a sample Ubuntu parameter file that was used in this article:
If you created your VM using a Red Hat Enterprise Linux Bring Your Own Subscription image (RHEL BYOS), make sure that: - You've followed the steps in [using RHEL BYOS image](azure-stack-edge-gpu-create-virtual-machine-image.md). -- After you created the GPU VM, register and subscribe the VM with the Red Hat Customer portal. If your VM is not properly registered, installation does not proceed as the VM is not entitled. See [Register and automatically subscribe in one step using the Red Hat Subscription Manager](https://access.redhat.com/solutions/253273). This step allows the installation script to download relevant packages for the GPU driver.
+- After you created the GPU VM, register and subscribe the VM with the Red Hat Customer portal. If your VM isn't properly registered, installation doesn't proceed as the VM isn't entitled. See [Register and automatically subscribe in one step using the Red Hat Subscription Manager](https://access.redhat.com/solutions/253273). This step allows the installation script to download relevant packages for the GPU driver.
- You either manually install the `vulkan-filesystem` package or add CentOS7 repo to your yum repo list. When you install the GPU extension, the installation script looks for a `vulkan-filesystem` package that is on CentOS7 repo (for RHEL7).
If you created your VM using a Red Hat Enterprise Linux Bring Your Own Subscript
### [Windows](#tab/windows)
-Deploy the template `addGPUextensiontoVM.json`. This template deploys extension to an existing VM. Run the following command:
+Deploy the template `addGPUextensiontoVM.json` to install the extension on an existing VM.
+
+Run the following command:
```powershell $templateFile = "<Path to addGPUextensiontoVM.json>" $templateParameterFile = "<Path to addGPUExtWindowsVM.parameters.json>"
-$RGName = "<Name of your resource group>"
$RGName = "<Name of your resource group>"
New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Name for your deployment>" ``` > [!NOTE] > The extension deployment is a long running job and takes about 10 minutes to complete.
-Here is a sample output:
-
-```powershell
-PS C:\WINDOWS\system32> "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
-C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json
-PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
-PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\ExtensionTemplates\addGPUExtWindowsVM.parameters.json"
-PS C:\WINDOWS\system32> $RGName = "myasegpuvm1"
-PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment3"
-
-DeploymentName : deployment3
-ResourceGroupName : myasegpuvm1
-ProvisioningState : Succeeded
-Timestamp : 12/16/2020 12:18:50 AM
-Mode : Incremental
-TemplateLink :
-Parameters :
+Here's a sample output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
+ C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json
+ PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
+ PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\ExtensionTemplates\addGPUExtWindowsVM.parameters.json"
+ PS C:\WINDOWS\system32> $RGName = "myasegpuvm1"
+ PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment3"
+
+ DeploymentName : deployment3
+ ResourceGroupName : myasegpuvm1
+ ProvisioningState : Succeeded
+ Timestamp : 12/16/2020 12:18:50 AM
+ Mode : Incremental
+ TemplateLink :
+ Parameters :
Name Type Value =============== ========================= ========== vmName String VM2
Parameters :
"DriverType": "CUDA" }
-Outputs :
-DeploymentDebugLogLevel :
-PS C:\WINDOWS\system32>
-```
+ Outputs :
+ DeploymentDebugLogLevel :
+ PS C:\WINDOWS\system32>
+ ```
### [Linux](#tab/linux)
-Deploy the template `addGPUextensiontoVM.json`. This template deploys extension to an existing VM. Run the following command:
+Deploy the template `addGPUextensiontoVM.json` to install the extension on an existing VM.
+
+Run the following command:
```powershell $templateFile = "Path to addGPUextensiontoVM.json"
New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $tem
> [!NOTE] > The extension deployment is a long running job and takes about 10 minutes to complete.
-Here is a sample output:
+Here's a sample output:
```powershell Copyright (C) Microsoft Corporation. All rights reserved.
Outputs :
DeploymentDebugLogLevel : PS C:\WINDOWS\system32> ```+ ## Track deployment ### [Windows](#tab/windows)
-To check the deployment state of extensions for a given VM, run the following command:
+To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator), and then run the following command:
```powershell Get-AzureRmVMExtension -ResourceGroupName <Name of resource group> -VMName <Name of VM> -Name <Name of the extension> ```
-Here is a sample output:
+
+Here's a sample output:
```powershell PS C:\WINDOWS\system32> Get-AzureRmVMExtension -ResourceGroupName myasegpuvm1 -VMName VM2 -Name windowsgpuext
A successful install is indicated by a `message` as `Enable Extension` and `stat
### [Linux](#tab/linux)
-Template deployment is a long running job. To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator). Run the following command:
+To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator), and then run the following command:
```powershell Get-AzureRmVMExtension -ResourceGroupName myResourceGroup -VMName <VM Name> -Name <Extension Name> ```
-Here is a sample output:
+
+Here's a sample output:
```powershell Copyright (C) Microsoft Corporation. All rights reserved.
The extension execution output is logged to the following file: `/var/log/azure/
### [Windows](#tab/windows)
-Sign in to the VM and run the nvidia-smi command-line utility installed with the driver. The `nvidia-smi.exe` is located at `C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe`. If you do not see the file, it's possible that the driver installation is still running in the background. Wait for 10 minutes and check again.
+Sign in to the VM and run the nvidia-smi command-line utility installed with the driver.
+
+#### Version 2205 and higher
+
+The `nvidia-smi.exe` is located at `C:\Windows\System32\nvidia-smi.exe`. If you don't see the file, it's possible that the driver installation is still running in the background. Wait for 10 minutes and check again.
-If the driver is installed, you see an output similar to the following sample:
+#### Versions lower than 2205
+
+The `nvidia-smi.exe` is located at `C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe`. If you don't see the file, it's possible that the driver installation is still running in the background. Wait for 10 minutes and check again.
+
+If the driver is installed, you see an output similar to the following sample:
```powershell PS C:\Users\Administrator> cd "C:\Program Files\NVIDIA Corporation\NVSMI"
Follow these steps to verify the driver installation:
1. Connect to the GPU VM. Follow the instructions in [Connect to a Linux VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-linux-vm).
- Here is a sample output:
+ Here's a sample output:
```powershell PS C:\WINDOWS\system32> ssh -l Administrator 10.57.50.60
Follow these steps to verify the driver installation:
Administrator@VM1:~$ ```
-2. Run the nvidia-smi command-line utility installed with the driver. If the driver is successfully installed, you will be able to run the utility and see the following output:
+2. Run the nvidia-smi command-line utility installed with the driver. If the driver is successfully installed, you'll be able to run the utility and see the following output:
```powershell Administrator@VM1:~$ nvidia-smi
For more information, see [Nvidia GPU driver extension for Linux](../virtual-mac
> [!NOTE] > After you finish installing the GPU driver and GPU extension, you no longer need to use a port with Internet access for compute. - - ## Remove GPU extension To remove the GPU extension, use the following command: `Remove-AzureRmVMExtension -ResourceGroupName <Resource group name> -VMName <VM name> -Name <Extension name>`
-Here is a sample output:
+Here's a sample output:
```powershell PS C:\azure-stack-edge-deploy-vms> Remove-AzureRmVMExtension -ResourceGroupName rgl -VMName WindowsVM -Name windowsgpuext
Requestld IsSuccessStatusCode StatusCode ReasonPhrase
True OK OK ``` - ## Next steps Learn how to:
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
Previously updated : 04/11/2022 Last updated : 05/25/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
Follow these steps to create a VM on your Azure Stack Edge Pro GPU device.
|Edge resource group |Select the resource group to add the image to. | |Save image as | The name for the VM image that you're creating from the VHD you uploaded to the storage account. | |OS type |Choose from Windows or Linux as the operating system of the VHD you'll use to create the VM image. |
+ |VM generation |Choose Gen 1 or Gen 2 as the generation of the image you'll use to create the VM. |
- ![Screenshot showing the Add image page for a virtual machine, with the Add button highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-6.png)
+ ![Screenshot showing the Add image page for a virtual machine with the Add button highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-6.png)
1. The VHD is downloaded, and the VM image is created. Image creation takes several minutes to complete. You'll see a notification for the successful completion of the VM image.<!--There's a fleeting notification that image creation is in progress, but I didn't see any notification that image creation completed successfully.-->
Follow these steps to create a VM on your Azure Stack Edge Pro GPU device.
1. After the VM image is successfully created, it's added to the list of images on the **Images** pane.
- ![Screenshot that shows the Images pane in Virtual Machines view of an Azure Stack Edge device. The entry for a VM image is highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-9.png)
+ ![Screenshot that shows the Images pane in Virtual Machines view of an Azure Stack Edge device.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-9.png)
The **Deployments** pane updates to indicate the status of the deployment.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script.md
Previously updated : 03/08/2021 Last updated : 05/24/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using an Azure PowerShell script so that I can efficiently manage my VMs.
Before you begin creating and managing a VM on your Azure Stack Edge Pro device
Location : DBELocal Tags :
- New-AzureRmImage -Image Microsoft.Azure.Commands.Compute.Automation.Models.PSImage -ImageName ig201221071831 -ResourceGroupName rg201221071831
+ New-AzureRmImage -Image Microsoft.Azure.Commands.Compute.Automation.Models.PSImage -ImageName ig201221071831 -ResourceGroupName rg201221071831 -HyperVGeneration V1
ResourceGroupName : rg201221071831 SourceVirtualMachine :
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell.md
Previously updated : 04/18/2022 Last updated : 05/24/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device. I want to use APIs so that I can efficiently manage my VMs.
You'll now create a VM image from the managed disk.
$DiskSize = "<Size greater than or equal to size of source managed disk>" $OsType = "<linux or windows>" $ImageName = "<Image name>"
+ $hyperVGeneration = "<Generation of the image: V1 or V2>"
``` 1. Create a VM image. The supported OS types are Linux and Windows. ```powershell
- $imageConfig = New-AzImageConfig -Location DBELocal
+ $imageConfig = New-AzImageConfig -Location DBELocal -HyperVGeneration $hyperVGeneration
$ManagedDiskId = (Get-AzDisk -Name $DiskName -ResourceGroupName $ResourceGroupName).Id Set-AzImageOsDisk -Image $imageConfig -OsType $OsType -OsState 'Generalized' -DiskSizeGB $DiskSize -ManagedDiskId $ManagedDiskId New-AzImage -Image $imageConfig -ImageName $ImageName -ResourceGroupName $ResourceGroupName
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-templates.md
Previously updated : 04/22/2022 Last updated : 05/25/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
-# Deploy VMs on your Azure Stack Edge Pro GPU device via templates
+# Deploy VMs on your Azure Stack Edge Pro GPU device via templates
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
The file `CreateImage.parameters.json` takes the following parameters:
"imageUri": { "value": "<Path to the VHD that you uploaded in the Storage account>" },
+ "hyperVGeneration": {
+ "value": "<Generation of the VM, V1 or V2>"
+ },
} ``` Edit the file `CreateImage.parameters.json` to include the following values for your Azure Stack Edge Pro device:
-1. Provide the OS type corresponding to the VHD you'll upload. The OS type can be Windows or Linux.
+1. Provide the OS type and Hyper-V generation corresponding to the VHD you'll upload. The OS type can be Windows or Linux, and the VM generation can be V1 or V2.
```json "parameters": { "osType": { "value": "Windows"
- },
+ },
+ "hyperVGeneration": {
+ "value": "V2"
+ },
+ }
``` 2. Change the image URI to the URI of the image you uploaded in the earlier step:
Edit the file `CreateImage.parameters.json` to include the following values for
"osType": { "value": "Linux" },
+ "hyperVGeneration": {
+ "value": "V1"
+ },
"imageName": { "value": "myaselinuximg" }, "imageUri": { "value": "https://sa2.blob.myasegpuvm.wdshcsso.com/con1/ubuntu18.04waagent.vhd"
- }
+ }
} } ```
databox-online Azure Stack Edge Gpu Prepare Windows Generalized Image Iso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-prepare-windows-generalized-image-iso.md
To create your new virtual machine, follow these steps:
![New Virtual Machine wizard, Specify Name and Location](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-08.png)
-4. Under **Specify Generation**, select **Generation 1**. Then select **Next >**.
+4. Under **Specify Generation**, select **Generation 1** or **Generation 2**. Then select **Next >**.
![New Virtual Machine wizard, Choose the generation of virtual machine to create](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-09.png)
databox-online Azure Stack Edge Gpu Prepare Windows Vhd Generalized Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md
Previously updated : 06/18/2021 Last updated : 05/18/2022 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
You'll use this fixed-size VHD for all the subsequent steps in this article.
![Specify name and location for your VM](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/create-virtual-machine-2.png)
-1. On the **Specify generation** page, choose **Generation 1** for the .vhd device image type, and then select **Next**.
+1. On the **Specify generation** page, choose **Generation 1** or **Generation 2** for the .vhd device image type, and then select **Next**.
![Specify generation](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/create-virtual-machine-3.png)
databox-online Azure Stack Edge Gpu Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-quickstart.md
Before you deploy, make sure that the following prerequisites are in place:
5. **Configure compute network**: Create a virtual switch by enabling a port on your device. Enter 2 free, contiguous static IPs for Kubernetes nodes in the same network in which you created the switch. Provide at least 1 static IP for the IoT Edge Hub service to access compute modules and 1 static IP for each extra service or container that you want to access from outside the Kubernetes cluster.
- Kubernetes is required to deploy all containerized workloads. See more information on [Compute network settings](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips).
+ Kubernetes is required to deploy all containerized workloads. See more information on [Compute network settings](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches).
6. **Configure web proxy**: If you use web proxy in your environment, enter web proxy server IP in `http://<web-proxy-server-FQDN>:<port-id>`. Set authentication to **None**. See more information on [Web proxy settings](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-web-proxy).
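The step-5 requirement that the two Kubernetes node IPs be free and contiguous can be sanity-checked before you commit the settings. Here's a minimal sketch (the addresses are illustrative examples, not values from any deployment) that converts two candidate IPv4 addresses to integers and verifies they're adjacent:

```shell
#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Hypothetical candidate Kubernetes node IPs -- substitute your own.
node_ip_1="192.168.1.10"
node_ip_2="192.168.1.11"

i1=$(ip_to_int "$node_ip_1")
i2=$(ip_to_int "$node_ip_2")

# Contiguous means the second address immediately follows the first.
if (( i2 - i1 == 1 )); then
  echo "contiguous"
else
  echo "not contiguous"
fi
```

This only checks adjacency; whether the addresses are actually free still has to be confirmed against your DHCP scope or IP plan.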
databox-online Azure Stack Edge Gpu Troubleshoot Virtual Machine Gpu Extension Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md
Previously updated : 08/02/2021 Last updated : 05/26/2022 # Troubleshoot GPU extension issues for GPU VMs on Azure Stack Edge Pro GPU
This article gives guidance for resolving the most common issues that cause inst
For installation steps, see [Install GPU extension](./azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md?tabs=linux).
+## In versions lower than 2205, Linux GPU extension installs old signing keys: signature and/or required key missing
+
+**Error description:** The Linux GPU extension installs old signing keys, preventing download of the required GPU driver. In this case, you'll see the following error in the syslog of the Linux VM:
+
+ ```powershell
+ /var/log/syslog and /var/log/waagent.log
 + May  5 06:04:53 gpuvm12 kernel: [  833.601805] nvidia: module verification failed: signature and/or required key missing - tainting kernel
+ ```
+**Suggested solutions:** You have two options to mitigate this issue:
+
+- **Option 1:** Apply the Azure Stack Edge 2205 updates to your device.
+- **Option 2:** After you create a GPU virtual machine of a size in the NCasT4_v3-series, manually install the new signing keys before you install the extension, using the steps in [Updating the CUDA Linux GPG Repository Key | NVIDIA Technical Blog](https://developer.nvidia.com/blog/updating-the-cuda-linux-gpg-repository-key/).
+
+ Here's an example that installs signing keys on an Ubuntu 1804 virtual machine:
+
+   ```powershell
+   $ sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
+   ```
+
+## Failure to install GPU extension on a Windows 2016 VHD
+
+**Error description:** This is a known issue in versions lower than 2205. The GPU extension requires TLS 1.2. In this case, you may see the following error message:
+
+ ```azurecli
+ Failed to download https://go.microsoft.com/fwlink/?linkid=871664 after 10 attempts. Exiting!
+ ```
+
+Additional details:
+
+- Check the guest log for the associated error. To collect the guest logs, see [Collect guest logs for VMs on an Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md).
+- On a Linux VM, look in `/var/log/waagent.log` or `/var/log/azure/nvidia-vmext-status`.
+- On a Windows VM, find the error status in `C:\Packages\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\1.3.0.0\Status`.
+- Review the complete execution log in `C:\WindowsAzure\Logs\WaAppAgent.txt`.
+
+If the installation failed during the package download, that error indicates the VM couldn't access the public network to download the driver.
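As a quick first check on a Linux VM, you can grep the agent log for the download failure described above. The sketch below writes a hypothetical sample log to a temporary file so it's self-contained; on a real VM you'd point grep at `/var/log/waagent.log` directly (the sample line text is illustrative):

```shell
# Create a stand-in for /var/log/waagent.log with one hypothetical failure entry.
log=$(mktemp)
cat > "$log" <<'EOF'
2022/05/26 10:00:01 INFO ExtHandler started
2022/05/26 10:05:12 ERROR Failed to download https://go.microsoft.com/fwlink/?linkid=871664 after 10 attempts. Exiting!
EOF

# A non-zero match count points at the download/TLS issue covered in this section.
grep -c "Failed to download" "$log"

rm -f "$log"
```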
++
+**Suggested solution:** Use the following steps to enable TLS 1.2 on a Windows 2016 VM, and then deploy the GPU extension.
+
+1. Run the following command inside the VM to enable TLS 1.2:
+
+ ```powershell
+ sp hklm:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319 SchUseStrongCrypto 1
+ ```
+
+1. Deploy the template `addGPUextensiontoVM.json` to install the extension on an existing VM. You can install the extension manually, or you can install the extension from the Azure portal.
+
+   - To install the extension manually, see [Install GPU extension on VMs for your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md).
+ - To install the template using the Azure portal, see [Deploy GPU VMs on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
+
+ > [!NOTE]
+ > The extension deployment is a long running job and takes about 10 minutes to complete.
+
+## Manually install the Nvidia driver on RHEL 7
+
+**Error description:** When installing the GPU extension on an RHEL 7 VM, the installation may fail due to a certificate rotation issue and an incompatible driver version.
+
+**Suggested solution:** In this case, you have two options:
+
+- **Option 1:** Resolve the certificate rotation issue and then install an Nvidia driver lower than version 510.
+
+ 1. To resolve the certificate rotation issue, run the following command:
+
+ ```powershell
+ $ sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/$arch/cuda-rhel7.repo
+ ```
+
+ 1. Install an Nvidia driver lower than version 510.
+
+- **Option 2:** Deploy the GPU extension. Use the following settings when deploying the ARM extension:
+
+   ```json
+   "settings": {
+     "isCustomInstall": true,
+     "InstallMethod": 0,
+     "DRIVER_URL": "https://developer.download.nvidia.com/compute/cuda/11.4.4/local_installers/cuda-repo-rhel7-11-4-local-11.4.4_470.82.01-1.x86_64.rpm",
+     "DKMS_URL": "https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm",
+     "LIS_URL": "https://aka.ms/lis",
+     "LIS_RHEL_ver": "3.10.0-1062.9.1.el7"
+   }
+ ```
+ ## VM size is not GPU VM size **Error description:** A GPU VM must be either Standard_NC4as_T4_v3 or Standard_NC8as_T4_v3 size. If any other VM size is used, the GPU extension will fail to be attached.
databox-online Azure Stack Edge Gpu Virtual Machine Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-overview.md
Previously updated : 04/21/2022 Last updated : 05/18/2022
You can run a maximum of 24 VMs on your device. This is another factor to consid
### Operating system disks and images
-On your device, you can only use Generation 1 VMs with a fixed virtual hard disk (VHD) format. VHDs are used to store the machine operating system (OS) and data. VHDs are also used for the images you use to install an OS.
+On your device, you can use Generation 1 or Generation 2 VMs with a fixed virtual hard disk (VHD) format. VHDs are used to store the machine operating system (OS) and data. VHDs are also used for the images you use to install an OS.
The images that you use to create VM images can be generalized or specialized. When creating images for your VMs, you must prepare the images. See the various ways to prepare and use VM images on your device:
databox-online Azure Stack Edge Pro 2 Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md
In this tutorial, you learn how to:
Before you set up a compute role on your Azure Stack Edge Pro device, make sure that: - You've activated your Azure Stack Edge Pro 2 device as described in [Activate Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-activate.md).-- Make sure that you've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips) and:
+- Make sure that you've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches) and:
- Enabled a network interface for compute. - Assigned Kubernetes node IPs and Kubernetes external service IPs.
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
To protect the Azure Resource Manager based registries in your subscription, ena
> > :::image type="content" source="media/defender-for-containers/enable-defender-for-containers.png" alt-text="Enable Microsoft Defender for Containers from the Defender plans page."::: >
-> Learn more about this change in [the release note](release-notes.md#microsoft-defender-for-containers-plan-released-for-general-availability-ga).
+> Learn more about this change in [the release note](release-notes-archive.md#microsoft-defender-for-containers-plan-released-for-general-availability-ga).
|Aspect|Details| |-|:-|
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Host-level threat detection for your Linux AKS nodes is available if you enable
> > :::image type="content" source="media/defender-for-containers/enable-defender-for-containers.png" alt-text="Enable Microsoft Defender for Containers from the Defender plans page."::: >
-> Learn more about this change in [the release note](release-notes.md#microsoft-defender-for-containers-plan-released-for-general-availability-ga).
+> Learn more about this change in [the release note](release-notes-archive.md#microsoft-defender-for-containers-plan-released-for-general-availability-ga).
|Aspect|Details|
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
+
+ Title: Driving your organization to remediate security issues with recommendation governance in Microsoft Defender for Cloud
+description: Learn how to assign owners and due dates to security recommendations and create rules to automatically assign owners and due dates
+++++ Last updated : 05/29/2022+
+# Drive your organization to remediate security recommendations with governance
+
+Security teams are responsible for improving the security posture of their organizations but they may not have the resources or authority to actually implement security recommendations. [Assigning owners with due dates](#manually-assigning-owners-and-due-dates-for-recommendation-remediation) and [defining governance rules](#building-an-automated-process-for-improving-security-with-governance-rules) creates accountability and transparency so you can drive the process of improving the security posture in your organization.
+
+Stay on top of the progress on the recommendations in the security posture. Weekly email notifications to the owners and managers make sure that they take timely action on the recommendations that can improve your security posture.
+
+## Building an automated process for improving security with governance rules
+
+To make sure your organization is systematically improving its security posture, you can define rules that assign an owner and set the due date for resources in the specified recommendations. That way resource owners have a clear set of tasks and deadlines for remediating recommendations.
+
+You can then review the progress of the tasks by subscription, recommendation, or owner so you can follow up with tasks that need more attention.
+
+### Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
+|Pricing:|Free|
+|Required roles and permissions:|Azure - **Contributor**, **Security Admin**, or **Owner** on the subscription<br>AWS, GCP - **Contributor**, **Security Admin**, or **Owner** on the connector|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP accounts|
+
+### Defining governance rules to automatically set the owner and due date of recommendations
+
+Governance rules can identify resources that require remediation according to specific recommendations or severities, and the rule assigns an owner and due date to make sure the recommendations are handled. Many governance rules can apply to the same recommendations, so the rule with the lowest priority value is the one that assigns the owner and due date.
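As a conceptual sketch only (not the service implementation), the rule-selection behavior described above can be expressed as: among all governance rules whose recommendation set matches, the rule with the lowest priority value wins and assigns the owner and due date.

```python
# Conceptual sketch of governance-rule precedence (not the service code):
# when several rules match the same recommendation, the rule with the
# lowest priority value is the one that assigns the owner and due date.
def applicable_rule(rules, recommendation):
    matching = [r for r in rules if recommendation in r["recommendations"]]
    return min(matching, key=lambda r: r["priority"], default=None)

rules = [
    {"name": "broad", "priority": 200, "recommendations": {"rec-a", "rec-b"}},
    {"name": "narrow", "priority": 50, "recommendations": {"rec-a"}},
]
print(applicable_rule(rules, "rec-a")["name"])  # narrow (priority 50 < 200)
```

The rule names, priority values, and recommendation identifiers here are hypothetical, chosen only to illustrate the precedence behavior.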
+
+The due date set for the recommendation to be remediated is based on a timeframe of 7, 14, 30, or 90 days from when the recommendation is found by the rule. For example, if the rule identifies the resource on March 1st and the remediation timeframe is 14 days, March 15th is the due date. You can apply a grace period so that the resources that are given a due date don't impact your secure score until they're overdue.
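The due-date arithmetic described above (timeframe of 7, 14, 30, or 90 days from when the rule finds the resource) can be sketched as follows; this is an illustrative example, not part of the service:

```python
from datetime import date, timedelta

# Illustrative sketch of the due-date calculation; the valid timeframes
# (7, 14, 30, or 90 days) come from the governance rule settings.
def due_date(found_on: date, timeframe_days: int) -> date:
    if timeframe_days not in (7, 14, 30, 90):
        raise ValueError("remediation timeframe must be 7, 14, 30, or 90 days")
    return found_on + timedelta(days=timeframe_days)

print(due_date(date(2022, 3, 1), 14))  # 2022-03-15, matching the example above
```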
+
+You can also set the owner of the resources that are affected by the specified recommendations. In organizations that use resource tags to associate resources with an owner, you can specify the tag key and the governance rule reads the name of the resource owner from the tag.
+
+By default, email notifications are sent to the resource owners weekly to provide a list of the on-time and overdue tasks. If an email address for the owner's manager is found in the organizational Azure Active Directory (Azure AD), the owner's manager receives a weekly email showing any overdue recommendations by default.
++
+To define a governance rule that assigns an owner and due date:
+
+1. In the **Environment settings**, select the Azure subscription, AWS account, or Google project that you want to define the rule for.
+1. In **Governance rules (preview)**, select **Add rule**.
+1. Enter a name for the rule.
+1. Set a priority for the rule. You can see the priority for the existing rules in the list of governance rules.
+1. Select the recommendations that the rule applies to, either:
+ - **By severity** - The rule assigns the owner and due date to any recommendation in the subscription that doesn't already have them assigned.
+ - **By name** - Select the specific recommendations that the rule applies to.
+1. Set the owner to assign to the recommendations either:
+ - **By resource tag** - Enter the resource tag on your resources that defines the resource owner.
+ - **By email address** - Enter the email address of the owner to assign to the recommendations.
+1. Set the **remediation timeframe**, which is the time between when the resources are identified to require remediation and the time that the remediation is due.
+1. If you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**.
+1. If you don't want either the owner or the owner's manager to receive weekly emails, clear the notification options.
+1. Select **Create**.
+
+If there are existing recommendations that match the definition of the governance rule, you can either:
+
+- Assign an owner and due date to recommendations that don't already have an owner or due date.
+- Overwrite the owner and due date of existing recommendations.
+
+## Manually assigning owners and due dates for recommendation remediation
+
+For every resource affected by a recommendation, you can assign an owner and a due date so that you know who needs to implement the security changes to improve your security posture and when they're expected to do it by. You can also apply a grace period so that the resources that are given a due date don't impact your secure score unless they become overdue.
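The grace-period behavior described above can be sketched as a small predicate; this is a hedged illustration of the stated behavior, not the service implementation:

```python
from datetime import date

# Sketch of the grace-period rule: with a grace period applied, an
# unhealthy resource only counts against the secure score once its
# due date has passed; without one, it counts immediately.
def impacts_secure_score(due: date, today: date, grace_period: bool) -> bool:
    if not grace_period:
        return True  # counts against the score immediately
    return today > due  # counts only once overdue
```

For example, with a due date of March 15 and a grace period applied, the resource doesn't affect the secure score on March 10, but does on March 16.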
+
+To manually assign owners and due dates to recommendations:
+
+1. Go to the list of recommendations:
+ - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment that you want to improve.
+ - Go to **Recommendations** in the Defender for Cloud menu.
+1. In the list of recommendations, use the **Potential score increase** to identify the security control that contains recommendations that will increase your secure score.
+
+ > [!TIP]
+ > You can also use the search box and filters above the list of recommendations to find specific recommendations.
+
+1. Select a recommendation to see the affected resources.
+1. For any resource that doesn't have an owner or due date, select the resources and select **Assign owner**.
+1. Enter the email address of the owner that needs to make the changes that remediate the recommendation for those resources.
+1. Select the date by which to remediate the recommendation for the resources.
+1. You can select **Apply grace period** to keep the resource from impacting the secure score until it's overdue.
+1. Select **Save**.
+
+The recommendation is now shown as assigned and on time.
+
+## Tracking the status of recommendations for further action
+
+After you define governance rules, you'll want to review the progress that the owners are making in remediating the recommendations.
+
+You can track the assigned and overdue recommendations in:
+
+- The security posture shows the number of unassigned and overdue recommendations.
+
+ :::image type="content" source="./media/governance-rules/governance-in-security-posture.png" alt-text="Screenshot of governance status in the security posture.":::
+
+- The list of recommendations shows the governance status of each recommendation.
+
+ :::image type="content" source="./media/governance-rules/governance-in-recommendations.png" alt-text="Screenshot of recommendations with their governance status." lightbox="media/governance-rules/governance-in-recommendations.png":::
+
+- The governance report in the governance rules settings lets you drill down into recommendations by rule and owner.
+
+ :::image type="content" source="./media/governance-rules/governance-in-workbook.png" alt-text="Screenshot of governance status by rule and owner in the governance workbook." lightbox="media/governance-rules/governance-in-workbook.png":::
+
+### Tracking progress by rule with the governance report
+
+The governance report lets you select subscriptions that have governance rules and, for each rule and owner, shows you how many recommendations are completed, on time, overdue, or unassigned.
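The per-rule, per-owner breakdown the report shows can be sketched with a simple aggregation; the data shape here is hypothetical (the real report is built in the portal):

```python
from collections import Counter, defaultdict

# Hypothetical data shape: each task is a (rule, owner, status) tuple,
# where status is one of "completed", "on time", "overdue", "unassigned".
def summarize(tasks):
    """Count recommendation statuses per (rule, owner) pair."""
    summary = defaultdict(Counter)
    for rule, owner, status in tasks:
        summary[(rule, owner)][status] += 1
    return summary
```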
+
+To review the status of the recommendations in a rule:
+
+1. In **Recommendations**, select **Governance report (preview)**.
+1. Select the subscriptions that you want to review.
+1. Select the rules that you want to see details about.
+
+You can see the list of owners and recommendations for the selected rules, and their status.
+
+To see the list of recommendations for each owner:
+
+1. Select **Security posture**.
+1. Select the **Owner (preview)** tab to see the list of owners and the number of overdue recommendations for each owner.
+
+ - Hover over the (i) in the overdue recommendations to see the breakdown of overdue recommendations by severity.
+
+ - If the owner email address is found in the organizational Azure Active Directory (Azure AD), you'll see the full name and picture of the owner.
+
+1. Select **View recommendations** to go to the list of recommendations associated with the owner.
+
+## Next steps
+
+In this article, you learned how to set up a process for assigning owners and due dates to tasks so that owners are accountable for taking steps to improve your security posture.
+
+Check out how owners can [set ETAs for tasks](review-security-recommendations.md#manage-the-owner-and-eta-of-recommendations-that-are-assigned-to-you) so that they can manage their progress.
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
Title: Implement security recommendations in Microsoft Defender for Cloud | Microsoft Docs description: This article explains how to respond to recommendations in Microsoft Defender for Cloud to protect your resources and satisfy security policies.-+ Last updated 11/09/2021
To simplify remediation and improve your environment's security (and increase yo
> [!TIP] > The **Fix** feature is only available for specific recommendations. To find recommendations that have an available fix, use the **Response actions** filter for the list of recommendations:
->
+>
> :::image type="content" source="media/implement-security-recommendations/quick-fix-filter.png" alt-text="Use the filters above the recommendations list to find recommendations that have the Fix option."::: To implement a **Fix**:
-1. From the list of recommendations that have the **Fix** action icon, :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::, select a recommendation.
+1. From the list of recommendations that have the **Fix** action icon :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::, select a recommendation.
:::image type="content" source="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png" alt-text="Recommendations list highlighting recommendations with Fix action" lightbox="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png":::
To implement a **Fix**:
The remediation operation uses a template deployment or REST API `PATCH` request to apply the configuration on the resource. These operations are logged in [Azure activity log](../azure-monitor/essentials/activity-log.md).

## Next steps

In this document, you were shown how to remediate recommendations in Defender for Cloud. To learn how recommendations are defined and selected for your environment, see the following page:
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 06/01/2022 Last updated : 06/06/2022 zone_pivot_groups: connect-gcp-accounts
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## December 2021
+
+Updates in December include:
+
+- [Microsoft Defender for Containers plan released for general availability (GA)](#microsoft-defender-for-containers-plan-released-for-general-availability-ga)
+- [New alerts for Microsoft Defender for Storage released for general availability (GA)](#new-alerts-for-microsoft-defender-for-storage-released-for-general-availability-ga)
+- [Improvements to alerts for Microsoft Defender for Storage](#improvements-to-alerts-for-microsoft-defender-for-storage)
+- ['PortSweeping' alert removed from network layer alerts](#portsweeping-alert-removed-from-network-layer-alerts)
+
+### Microsoft Defender for Containers plan released for general availability (GA)
+
+Over two years ago, we introduced [Defender for Kubernetes](defender-for-kubernetes-introduction.md) and [Defender for container registries](defender-for-container-registries-introduction.md) as part of the Azure Defender offering within Microsoft Defender for Cloud.
+
+With the release of [Microsoft Defender for Containers](defender-for-containers-introduction.md), we've merged these two existing Defender plans.
+
+The new plan:
+
+- **Combines the features of the two existing plans** - threat detection for Kubernetes clusters and vulnerability assessment for images stored in container registries
+- **Brings new and improved features** - including multicloud support, host level threat detection with over **sixty** new Kubernetes-aware analytics, and vulnerability assessment for running images
+- **Introduces Kubernetes-native at-scale onboarding** - by default, when you enable the plan all relevant components are configured to be deployed automatically
+
+With this release, the availability and presentation of Defender for Kubernetes and Defender for container registries has changed as follows:
+
+- New subscriptions - The two previous container plans are no longer available
+- Existing subscriptions - Wherever they appear in the Azure portal, the plans are shown as **Deprecated** with instructions for how to upgrade to the newer plan
+ :::image type="content" source="media/release-notes/defender-plans-deprecated-indicator.png" alt-text="Defender for container registries and Defender for Kubernetes plans showing 'Deprecated' and upgrade information.":::
+
+The new plan is free for the month of December 2021. For the potential changes to the billing from the old plans to Defender for Containers, and for more information on the benefits introduced with this plan, see [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317).
+
+For more information, see:
+
+- [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md)
+- [Enable Microsoft Defender for Containers](defender-for-containers-enable.md)
+- [Introducing Microsoft Defender for Containers - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)
+- [Microsoft Defender for Containers | Defender for Cloud in the Field #3 - YouTube](https://www.youtube.com/watch?v=KeH0a3enLJ0&t=201s)
+
+### New alerts for Microsoft Defender for Storage released for general availability (GA)
+
+Threat actors use tools and scripts to scan for publicly open containers in the hope of finding misconfigured open storage containers with sensitive data.
+
+Microsoft Defender for Storage detects these scanners so that you can block them and remediate your posture.
+
+The preview alert that detected this was called **"Anonymous scan of public storage containers"**. To provide greater clarity about the suspicious events discovered, we've divided this into **two** new alerts. These alerts are relevant to Azure Blob Storage only.
+
+We've improved the detection logic, updated the alert metadata, and changed the alert name and alert type.
+
+These are the new alerts:
+
+| Alert (alert type) | Description | MITRE tactic | Severity |
+|||--|-|
+| **Publicly accessible storage containers successfully discovered**<br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery) | A successful discovery of publicly open storage container(s) in your storage account was performed in the last hour by a scanning script or tool.<br><br> This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Medium |
+| **Publicly accessible storage containers unsuccessfully scanned**<br>(Storage.Blob_OpenContainersScanning.FailedAttempt) | A series of failed attempts to scan for publicly open storage containers were performed in the last hour. <br><br>This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Low |
+
+For more information, see:
+
+- [Threat matrix for storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/)
+- [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md)
+- [List of alerts provided by Microsoft Defender for Storage](alerts-reference.md#alerts-azurestorage)
+
+### Improvements to alerts for Microsoft Defender for Storage
+
+The initial access alerts now have improved accuracy and more data to support investigation.
+
+Threat actors use various techniques in the initial access to gain a foothold within a network. Two of the [Microsoft Defender for Storage](defender-for-storage-introduction.md) alerts that detect behavioral anomalies in this stage now have improved detection logic and additional data to support investigations.
+
+If you've [configured automations](workflow-automation.md) or defined [alert suppression rules](alerts-suppression-rules.md) for these alerts in the past, update them in accordance with these changes.
+
+#### Detecting access from a Tor exit node
+
+Access from a Tor exit node might indicate a threat actor trying to hide their identity.
+
+The alert is now tuned to generate only for authenticated access, which results in higher accuracy and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
+
+An outlying pattern will have high severity, while less anomalous patterns will have medium severity.
+
+The alert name and description have been updated. The AlertType remains unchanged.
+
+- Alert name (old): Access from a Tor exit node to a storage account
+- Alert name (new): Authenticated access from a Tor exit node
+- Alert types: Storage.Blob_TorAnomaly / Storage.Files_TorAnomaly
+- Description: One or more storage container(s) / file share(s) in your storage account were successfully accessed from an IP address known to be an active exit node of Tor (an anonymizing proxy). Threat actors use Tor to make it difficult to trace the activity back to them. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity. Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2
+- MITRE tactic: Initial access
+- Severity: High/Medium
+
+#### Unusual unauthenticated access
+
+A change in access patterns may indicate that a threat actor was able to exploit public read access to storage containers, either by exploiting a mistake in access configurations, or by changing the access permissions.
+
+This medium severity alert is now tuned with improved behavioral logic, higher accuracy, and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
+
+The alert name and description have been updated. The AlertType remains unchanged.
+
+- Alert name (old): Anonymous access to a storage account
+- Alert name (new): Unusual unauthenticated access to a storage container
+- Alert types: Storage.Blob_AnonymousAccessAnomaly
+- Description: This storage account was accessed without authentication, which is a change in the common access pattern. Read access to this container is usually authenticated. This might indicate that a threat actor was able to exploit public read access to storage container(s) in this storage account(s). Applies to: Azure Blob Storage
+- MITRE tactic: Collection
+- Severity: Medium
+
+For more information, see:
+
+- [Threat matrix for storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/)
+- [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md)
+- [List of alerts provided by Microsoft Defender for Storage](alerts-reference.md#alerts-azurestorage)
+
+### 'PortSweeping' alert removed from network layer alerts
+
+The following alert was removed from our network layer alerts due to inefficiencies:
+
+| Alert (alert type) | Description | MITRE tactics | Severity |
+||-|:--:||
+| **Possible outgoing port scanning activity detected**<br>(PortSweeping) | Network traffic analysis detected suspicious outgoing traffic from %{Compromised Host}. This traffic may be a result of a port scanning activity. When the compromised resource is a load balancer or an application gateway, the suspected outgoing traffic originated from one or more of the resources in the backend pool (of the load balancer or application gateway). If this behavior is intentional, note that performing port scanning is against Azure Terms of service. If this behavior is unintentional, it may mean your resource has been compromised. | Discovery | Medium |
+
## November 2021

Our Ignite release includes:
Other changes in November include:
- [New AKS security policy added to default initiative - for use by private preview customers only](#new-aks-security-policy-added-to-default-initiative--for-use-by-private-preview-customers-only)
- [Inventory display of on-premises machines applies different template for resource name](#inventory-display-of-on-premises-machines-applies-different-template-for-resource-name)

### Azure Security Center and Azure Defender become Microsoft Defender for Cloud

According to the [2021 State of the Cloud report](https://info.flexera.com/CM-REPORT-State-of-the-Cloud#download), 92% of organizations now have a multicloud strategy. At Microsoft, our goal is to centralize security across these environments and help security teams work more effectively.
According to the [2021 State of the Cloud report](https://info.flexera.com/CM-RE
At Ignite 2019, we shared our vision to create the most complete approach for securing your digital estate and integrating XDR technologies under the Microsoft Defender brand. Unifying Azure Security Center and Azure Defender under the new name **Microsoft Defender for Cloud** reflects the integrated capabilities of our security offering and our ability to support any cloud platform.

### Native CSPM for AWS and threat protection for Amazon EKS, and AWS EC2
-A new **environment settings** page provides greater visibility and control over your management groups, subscriptions, and AWS accounts. The page is designed to onboard AWS accounts at scale: connect your AWS **management account**, and you'll automatically onboard existing and future accounts.
+A new **environment settings** page provides greater visibility and control over your management groups, subscriptions, and AWS accounts. The page is designed to onboard AWS accounts at scale: connect your AWS **management account**, and you'll automatically onboard existing and future accounts.
:::image type="content" source="media/release-notes/add-aws-account.png" alt-text="Use the new environment settings page to connect your AWS accounts.":::
When you've added your AWS accounts, Defender for Cloud protects your AWS resour
Learn more about [connecting your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md).

### Prioritize security actions by data sensitivity (powered by Microsoft Purview) (in preview)

Data resources remain a popular target for threat actors. So it's crucial for security teams to identify, prioritize, and secure sensitive data resources across their cloud environments. To address this challenge, Microsoft Defender for Cloud now integrates sensitivity information from [Microsoft Purview](../purview/overview.md). Microsoft Purview is a unified data governance service that provides rich insights into the sensitivity of your data within multicloud, and on-premises workloads.
The integration with Microsoft Purview extends your security visibility in Defen
Learn more in [Prioritize security actions by data sensitivity](information-protection.md).

### Expanded security control assessments with Azure Security Benchmark v3
-Microsoft Defender for Cloud's security recommendations are enabled and supported by the Azure Security Benchmark.
+
+Microsoft Defender for Cloud's security recommendations are enabled and supported by the Azure Security Benchmark.
[Azure Security Benchmark](../security/benchmarks/introduction.md) is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. From Ignite 2021, Azure Security Benchmark **v3** is available in [Defender for Cloud's regulatory compliance dashboard](update-regulatory-compliance-packages.md) and enabled as the new default initiative for all Azure subscriptions protected with Microsoft
-Defender for Cloud.
+Defender for Cloud.
-Enhancements for v3 include:
+Enhancements for v3 include:
- Additional mappings to industry frameworks [PCI-DSS v3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf) and [CIS Controls v8](https://www.cisecurity.org/controls/v8/). - More granular and actionable guidance for controls with the introduction of:
  - - **Security Principles** - Providing insight into the overall security objectives that build the foundation for our recommendations.
  - - **Azure Guidance** - The technical "how-to" for meeting these objectives.
  + - **Security Principles** - Providing insight into the overall security objectives that build the foundation for our recommendations.
  + - **Azure Guidance** - The technical "how-to" for meeting these objectives.
- New controls include DevOps security for issues such as threat modeling and software supply chain security, as well as key and certificate management for best practices in Azure.

Learn more in [Introduction to Azure Security Benchmark](/security/benchmark/azure/introduction).

### Microsoft Sentinel connector's optional bi-directional alert synchronization released for general availability (GA)

In July, [we announced](release-notes-archive.md#azure-sentinel-connector-now-includes-optional-bi-directional-alert-synchronization-in-preview) a preview feature, **bi-directional alert synchronization**, for the built-in connector in [Microsoft Sentinel](../sentinel/index.yml) (Microsoft's cloud-native SIEM and SOAR solution). This feature is now released for general availability (GA).
SecOps teams can choose the relevant Microsoft Sentinel workspace directly from
The new recommendation, "Diagnostic logs in Kubernetes services should be enabled" includes the 'Fix' option for faster remediation.
We've also enhanced the "Auditing on SQL server should be enabled" recommendation with the same Sentinel streaming capabilities.
### Recommendations mapped to the MITRE ATT&CK® framework - released for general availability (GA)
In October, [we announced](release-notes-archive.md#microsoft-threat-and-vulnera
Use **threat and vulnerability management** to discover vulnerabilities and misconfigurations in near real time with the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) enabled, and without the need for additional agents or periodic scans. Threat and vulnerability management prioritizes vulnerabilities based on the threat landscape and detections in your organization.
Use the security recommendation "[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)" to surface the vulnerabilities detected by threat and vulnerability management for your [supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os?view=o365-worldwide&preserve-view=true).
To automatically surface the vulnerabilities, on existing and new machines, without the need to manually remediate the recommendation, see [Vulnerability assessment solutions can now be auto enabled (in preview)](release-notes-archive.md#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview).
When Defender for Endpoint detects a threat, it triggers an alert. The alert is
Learn more in [Protect your endpoints with Security Center's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).

### Snapshot export for recommendations and security findings (in preview)

Defender for Cloud generates detailed security alerts and recommendations. You can view them in the portal or through programmatic tools. You might also need to export some or all of this information for tracking with other monitoring tools in your environment.
In October, [we announced](release-notes-archive.md#software-inventory-filters-a
You can query the software inventory data in **Azure Resource Graph Explorer**.
To use these features, you'll need to enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
For full details, including sample Kusto queries for Azure Resource Graph, see [Access a software inventory](asset-inventory.md#access-a-software-inventory).
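As a sketch of what such a query can look like (the `securityresources` table is part of Azure Resource Graph; the exact property names shown here are assumptions, so check the linked article for the documented schema), you can list the reported software inventory like this:

```kusto
// List installed software reported by the threat and vulnerability
// management integration, one row per machine/software pair.
securityresources
| where type == "microsoft.security/softwareinventories"
| project machineId = id,
          vendor = properties.vendor,
          softwareName = properties.softwareName,
          version = properties.version
```

You can run a query along these lines in **Azure Resource Graph Explorer** in the Azure portal, and filter further (for example, on `softwareName` and `version`) to find machines running a specific release.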
To ensure that Kubernetes workloads are secure by default, Defender for Cloud in
As part of this project, we've added a policy and recommendation (disabled by default) for gating deployment on Kubernetes clusters. The policy is in the default initiative but is only relevant for organizations who register for the related private preview.
You can safely ignore the policy and recommendation ("Kubernetes clusters should gate deployment of vulnerable images"); there will be no impact on your environment.
If you'd like to participate in the private preview, you'll need to be a member of the private preview ring. If you're not already a member, submit a request [here](https://aka.ms/atscale). Members will be notified when the preview begins.
Updates in October include:
- [Recommendations details pages now show related recommendations](#recommendations-details-pages-now-show-related-recommendations)
- [New alerts for Azure Defender for Kubernetes (in preview)](#new-alerts-for-azure-defender-for-kubernetes-in-preview)

### Microsoft Threat and Vulnerability Management added as vulnerability assessment solution (in preview)
We've extended the integration between [Azure Defender for Servers](defender-for-servers-introduction.md) and Microsoft Defender for Endpoint, to support a new vulnerability assessment provider for your machines: [Microsoft threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt).
Use **threat and vulnerability management** to discover vulnerabilities and misconfigurations in near real time with the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) enabled, and without the need for additional agents or periodic scans. Threat and vulnerability management prioritizes vulnerabilities based on the threat landscape and detections in your organization.
Use the security recommendation "[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)" to surface the vulnerabilities detected by threat and vulnerability management for your [supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os?view=o365-worldwide&preserve-view=true).
To automatically surface the vulnerabilities, on existing and new machines, without the need to manually remediate the recommendation, see [Vulnerability assessment solutions can now be auto enabled (in preview)](#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview).
Learn more in [Automatically configure vulnerability assessment for your machine
### Software inventory filters added to asset inventory (in preview)
The [asset inventory](asset-inventory.md) page now includes a filter to select machines running specific software - and even specify the versions of interest.
Additionally, you can query the software inventory data in **Azure Resource Graph Explorer**.
To use these new features, you'll need to enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
For full details, including sample Kusto queries for Azure Resource Graph, see [Access a software inventory](asset-inventory.md#access-a-software-inventory).

:::image type="content" source="media/deploy-vulnerability-assessment-tvm/software-inventory.png" alt-text="If you've enabled the threat and vulnerability solution, Security Center's asset inventory offers a filter to select resources by their installed software.":::
### Changed prefix of some alert types from "ARM_" to "VM_"

In July 2021, we announced a [logical reorganization of Azure Defender for Resource Manager alerts](release-notes-archive.md#logical-reorganization-of-azure-defender-for-resource-manager-alerts).
As part of a logical reorganization of some of the Azure Defender plans, we moved twenty-one alerts from [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) to [Azure Defender for Servers](defender-for-servers-introduction.md).
With this update, we've changed the prefixes of these alerts to match this reass
| ARM_VMAccessUnusualPasswordReset | VM_VMAccessUnusualPasswordReset |
| ARM_VMAccessUnusualSSHReset | VM_VMAccessUnusualSSHReset |

Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for Servers](defender-for-servers-introduction.md) plans.

### Changes to the logic of a security recommendation for Kubernetes clusters
The recommendation "Kubernetes clusters should not use the default namespace" prevents usage of the default namespace for a range of resource types. Two of the resource types that were included in this recommendation have been removed: ConfigMap and Secret.
Learn more about this recommendation and hardening your Kubernetes clusters in [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).

### Recommendations details pages now show related recommendations
To clarify the relationships between different recommendations, we've added a **Related recommendations** area to the details pages of many recommendations.
The three relationship types that are shown on these pages are:
Obviously, Security Center can't notify you about discovered vulnerabilities unl
Therefore:
- Recommendation #1 is a prerequisite for recommendation #2
- Recommendation #2 depends upon recommendation #1
:::image type="content" source="media/release-notes/related-recommendations-solution-not-found.png" alt-text="Screenshot of recommendation to deploy vulnerability assessment solution.":::

:::image type="content" source="media/release-notes/related-recommendations-vulnerabilities-found.png" alt-text="Screenshot of recommendation to resolve discovered vulnerabilities.":::

### New alerts for Azure Defender for Kubernetes (in preview)

To expand the threat protections provided by Azure Defender for Kubernetes, we've added two preview alerts.
These alerts are generated based on a new machine learning model and Kubernetes
| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) | Kubernetes audit log analysis detected a pod deployment that is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation relate to one another. The features monitored by this analytic include the container image registry used, the account performing the deployment, the day of the week, how often this account performs pod deployments, the user agent used in the operation, whether this is a namespace in which pod deployments often occur, and other features. The top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. From examining role assignments, the listed permissions are uncommon for the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Azure Defender. | Privilege Escalation | Low |

For