Updates from: 06/07/2022 01:18:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
This article provides you with the information you need to configure wildcard ap
- Note - Regular application will always take precedence over a complex app (wildcard application).

## Pre-requisites
-Before you get started with single sign-on for header-based authentication apps, make sure your environment is ready with the following settings and configurations:
+Before you get started with Application Proxy Complex application scenario apps, make sure your environment is ready with the following settings and configurations:
- You need to enable Application Proxy and install a connector that has line of site to your applications. See the tutorial [Add an on-premises application for remote access through Application Proxy](application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad) to learn how to prepare your on-premises environment, install and register a connector, and test the connector.
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/nymi-band |
| Octatco | ![y] | ![y]| ![n]| ![n]| ![n] | https://octatco.com/ |
| OneSpan Inc. | ![n] | ![y]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |
+| Swissbit | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.swissbit.com/en/products/ishield-fido2/ |
| Thales Group | ![n] | ![y]| ![y]| ![n]| ![n] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |
The following providers offer FIDO2 security keys of different form factors that
| Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ |
+
<!--Image references-->
[y]: ./media/fido2-compatibility/yes.png
[n]: ./media/fido2-compatibility/no.png
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Title: Configure a Temporary Access Pass in Azure AD to register Passwordless authentication methods
-description: Learn how to configure and enable users to to register Passwordless authentication methods by using a Temporary Access Pass
+description: Learn how to configure and enable users to register Passwordless authentication methods by using a Temporary Access Pass
Previously updated : 10/22/2021 Last updated : 05/24/2022
-# Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods (Preview)
+# Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods
Passwordless authentication methods, such as FIDO2 and Passwordless Phone Sign-in through the Microsoft Authenticator app, enable users to sign in securely without a password. Users can bootstrap Passwordless methods in one of two ways:
Users can bootstrap Passwordless methods in one of two ways:
- Using existing Azure AD Multi-Factor Authentication methods
- Using a Temporary Access Pass (TAP)
-A Temporary Access Pass is a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including Passwordless ones.
+A Temporary Access Pass is a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including Passwordless ones such as Microsoft Authenticator or even Windows Hello.
A Temporary Access Pass also makes recovery easier when a user has lost or forgotten their strong authentication factor like a FIDO2 security key or Microsoft Authenticator app, but needs to sign in to register new strong authentication methods. This article shows you how to enable and use a Temporary Access Pass in Azure AD using the Azure portal. You can also perform these actions using the REST APIs.
->[!NOTE]
->Temporary Access Pass is currently in public preview. Some features might not be supported or have limited capabilities.
-
## Enable the Temporary Access Pass policy

A Temporary Access Pass policy defines settings, such as the lifetime of passes created in the tenant, or the users and groups who can use a Temporary Access Pass to sign in.
-Before anyone can sign in with a Temporary Access Pass, you need to enable the authentication method policy and choose which users and groups can sign in by using a Temporary Access Pass.
+Before anyone can sign in with a Temporary Access Pass, you need to enable Temporary Access Pass in the authentication method policy and choose which users and groups can sign in by using a Temporary Access Pass.
Although you can create a Temporary Access Pass for any user, only those included in the policy can sign in with it.

Global administrator and Authentication Method Policy administrator role holders can update the Temporary Access Pass authentication method policy.

To configure the Temporary Access Pass authentication method policy:
-1. Sign in to the Azure portal as a Global admin and click **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**.
-1. Click **Yes** to enable the policy, select which users have the policy applied, and any **General** settings.
+1. Sign in to the Azure portal as a Global admin or Authentication Policy admin and click **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**.
+![Screenshot of how to manage Temporary Access Pass within the authentication method policy experience.](./media/how-to-authentication-temporary-access-pass/policy.png)
+1. Set **Enable** to **Yes** to enable the policy and select which users have the policy applied.
+![Screenshot of how to enable the Temporary Access Pass authentication method policy.](./media/how-to-authentication-temporary-access-pass/policy-scope.png)
+1. (Optional) Click **Configure** and modify the default Temporary Access Pass settings, such as the maximum lifetime or length.
+![Screenshot of how to customize the settings for Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/policy-settings.png)
+1. Click **Save** to apply the policy.
+
- ![Screenshot of how to enable the Temporary Access Pass authentication method policy](./media/how-to-authentication-temporary-access-pass/policy.png)
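The same policy can also be set programmatically. A minimal sketch using Microsoft Graph PowerShell follows; it assumes the Microsoft.Graph.Identity.SignIns module and the `Policy.ReadWrite.AuthenticationMethod` permission, and the property values are illustrative:

```powershell
# Sketch: enable the Temporary Access Pass authentication method policy for all users.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$body = @{
    "@odata.type"            = "#microsoft.graph.temporaryAccessPassAuthenticationMethodConfiguration"
    state                    = "enabled"
    defaultLifetimeInMinutes = 60    # illustrative; must sit between the policy's min/max lifetime
    defaultLength            = 8
}
Update-MgPolicyAuthenticationMethodPolicyAuthenticationMethodConfiguration `
    -AuthenticationMethodConfigurationId "TemporaryAccessPass" `
    -BodyParameter $body
```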
The default value and the range of allowed values are described in the following table.
To configure the Temporary Access Pass authentication method policy:
| Setting | Default values | Allowed values | Comments |
|---|---|---|---|
| Minimum lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Minimum number of minutes that the Temporary Access Pass is valid. |
- | Maximum lifetime | 24 hours | 10 – 43200 Minutes (30 days) | Maximum number of minutes that the Temporary Access Pass is valid. |
+ | Maximum lifetime | 8 hours | 10 – 43200 Minutes (30 days) | Maximum number of minutes that the Temporary Access Pass is valid. |
| Default lifetime | 1 hour | 10 – 43200 Minutes (30 days) | Default values can be overridden by the individual passes, within the minimum and maximum lifetime configured by the policy. |
| One-time use | False | True / False | When the policy is set to false, passes in the tenant can be used either once or more than once during their validity (maximum lifetime). By enforcing one-time use in the Temporary Access Pass policy, all passes created in the tenant will be created as one-time use. |
| Length | 8 | 8-48 characters | Defines the length of the passcode. |
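As a worked example of how these settings interact, the sketch below requests a one-time pass with a 60-minute lifetime; the request succeeds only if 60 minutes falls between the policy's minimum and maximum lifetime (user and values are illustrative):

```powershell
# Sketch: a one-time Temporary Access Pass whose lifetime must fall inside
# the policy's Minimum/Maximum lifetime window.
$body = @{
    isUsableOnce      = $true
    lifetimeInMinutes = 60
}
New-MgUserAuthenticationTemporaryAccessPassMethod -UserId tapuser@contoso.com -BodyParameter $body
```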
These roles can perform the following actions related to a Temporary Access Pass
1. Click **Azure Active Directory**, browse to Users, select a user, such as *Chris Green*, then choose **Authentication methods**.
1. If needed, select the option to **Try the new user authentication methods experience**.
1. Select the option to **Add authentication methods**.
-1. Below **Choose method**, click **Temporary Access Pass (Preview)**.
+1. Below **Choose method**, click **Temporary Access Pass**.
1. Define a custom activation time or duration and click **Add**.
- ![Screenshot of how to create a Temporary Access Pass](./media/how-to-authentication-temporary-access-pass/create.png)
+ ![Screenshot of how to create a Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/create.png)
1. Once added, the details of the Temporary Access Pass are shown. Make a note of the actual Temporary Access Pass value. You provide this value to the user. You can't view this value after you click **Ok**.
- ![Screenshot of Temporary Access Pass details](./media/how-to-authentication-temporary-access-pass/details.png)
+ ![Screenshot of Temporary Access Pass details.](./media/how-to-authentication-temporary-access-pass/details.png)
The following commands show how to create and get a Temporary Access Pass by using PowerShell:
The following commands show how to create and get a Temporary Access Pass by usi
# Create a Temporary Access Pass for a user
$properties = @{}
$properties.isUsableOnce = $True
-$properties.startDateTime = '2021-03-11 06:00:00'
+$properties.startDateTime = '2022-05-23 06:00:00'
$propertiesJSON = $properties | ConvertTo-Json

New-MgUserAuthenticationTemporaryAccessPassMethod -UserId user2@contoso.com -BodyParameter $propertiesJSON

Id                                   CreatedDateTime      IsUsable IsUsableOnce LifetimeInMinutes MethodUsabilityReason StartDateTime       TemporaryAccessPass
--                                   ---------------      -------- ------------ ----------------- --------------------- -------------       -------------------
-c5dbd20a-8b8f-4791-a23f-488fcbde3b38 9/03/2021 11:19:17 PM False True 60 NotYetValid 11/03/2021 6:00:00 AM TAPRocks!
+c5dbd20a-8b8f-4791-a23f-488fcbde3b38 5/22/2022 11:19:17 PM False True 60 NotYetValid 23/05/2022 6:00:00 AM TAPRocks!
# Get a user's Temporary Access Pass
Get-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com

Id                                   CreatedDateTime      IsUsable IsUsableOnce LifetimeInMinutes MethodUsabilityReason StartDateTime       TemporaryAccessPass
--                                   ---------------      -------- ------------ ----------------- --------------------- -------------       -------------------
-c5dbd20a-8b8f-4791-a23f-488fcbde3b38 9/03/2021 11:19:17 PM False True 60 NotYetValid 11/03/2021 6:00:00 AM
+c5dbd20a-8b8f-4791-a23f-488fcbde3b38 5/22/2022 11:19:17 PM False True 60 NotYetValid 23/05/2022 6:00:00 AM
```

## Use a Temporary Access Pass
-The most common use for a Temporary Access Pass is for a user to register authentication details during the first sign-in, without the need to complete additional security prompts. Authentication methods are registered at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). Users can also update existing authentication methods here.
+The most common use for a Temporary Access Pass is for a user to register authentication details during the first sign-in or device setup, without the need to complete additional security prompts. Authentication methods are registered at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo). Users can also update existing authentication methods here.
1. Open a web browser to [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo).
1. Enter the UPN of the account you created the Temporary Access Pass for, such as *tapuser@contoso.com*.
1. If the user is included in the Temporary Access Pass policy, they will see a screen to enter their Temporary Access Pass.
1. Enter the Temporary Access Pass that was displayed in the Azure portal.
- ![Screenshot of how to enter a Temporary Access Pass](./media/how-to-authentication-temporary-access-pass/enter.png)
+ ![Screenshot of how to enter a Temporary Access Pass.](./media/how-to-authentication-temporary-access-pass/enter.png)
>[!NOTE]
>For federated domains, a Temporary Access Pass is preferred over federation. A user with a Temporary Access Pass will complete the authentication in Azure AD and will not get redirected to the federated Identity Provider (IdP).
The user is now signed in and can update or register a method such as FIDO2 secu
Users who update their authentication methods due to losing their credentials or device should make sure they remove the old authentication methods. Users can also continue to sign in by using their password; a TAP doesn't replace a user's password.
+
+### User management of Temporary Access Pass
+
+Users managing their security information at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo) will see an entry for the Temporary Access Pass. If a user doesn't have any other registered methods, a banner at the top of the screen prompts them to add a new sign-in method. Users can also view the TAP expiration time and delete the TAP if it's no longer needed.
+
+![Screenshot of how users can manage a Temporary Access Pass in My Security Info.](./media/how-to-authentication-temporary-access-pass/tap-my-security-info.png)
+
+### Windows device setup
+Users with a Temporary Access Pass can navigate the setup process on Windows 10 and 11 to perform device join operations and configure Windows Hello for Business. How a Temporary Access Pass is used to set up Windows Hello for Business varies based on the device's join state:
+- During Azure AD Join setup, users can authenticate with a TAP (no password required) and set up Windows Hello for Business.
+- On already Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
+- On Hybrid Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
+
+![Screenshot of how to enter Temporary Access Pass when setting up Windows 10.](./media/how-to-authentication-temporary-access-pass/windows-10-tap.png)
+
### Passwordless phone sign-in

Users can also use their Temporary Access Pass to register for Passwordless phone sign-in directly from the Authenticator app. For more information, see [Add your work or school account to the Microsoft Authenticator app](https://support.microsoft.com/account-billing/add-your-work-or-school-account-to-the-microsoft-authenticator-app-43a73ab5-b4e8-446d-9e54-2a4cb8e4e93c).
-![Screenshot of how to enter a Temporary Access Pass using work or school account](./media/how-to-authentication-temporary-access-pass/enter-work-school.png)
+![Screenshot of how to enter a Temporary Access Pass using work or school account.](./media/how-to-authentication-temporary-access-pass/enter-work-school.png)
### Guest access
Users need to reauthenticate with different authentication methods after the Tem
Under the **Authentication methods** for a user, the **Detail** column shows when the Temporary Access Pass expired. You can delete an expired Temporary Access Pass using the following steps:

1. In the Azure AD portal, browse to **Users**, select a user, such as *Tap User*, then choose **Authentication methods**.
-1. On the right-hand side of the **Temporary Access Pass (Preview)** authentication method shown in the list, select **Delete**.
+1. On the right-hand side of the **Temporary Access Pass** authentication method shown in the list, select **Delete**.
You can also use PowerShell:
Remove-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com -
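A complete removal call needs the pass's authentication method Id; a minimal sketch, again assuming the Microsoft.Graph.Identity.SignIns module:

```powershell
# Sketch: look up the user's Temporary Access Pass, then remove it by its method Id.
$tap = Get-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com
Remove-MgUserAuthenticationTemporaryAccessPassMethod -UserId user3@contoso.com `
    -TemporaryAccessPassAuthenticationMethodId $tap.Id
```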
- A user can only have one Temporary Access Pass. The passcode can be used between the start and end time of the Temporary Access Pass.
- If the user requires a new Temporary Access Pass:
- - If the existing Temporary Access Pass is valid, the admin needs to delete the existing Temporary Access Pass and create a new pass for the user.
+ - If the existing Temporary Access Pass is valid, the admin can create a new Temporary Access Pass, which will override the existing valid Temporary Access Pass.
- If the existing Temporary Access Pass has expired, a new Temporary Access Pass will override the existing Temporary Access Pass.

For more information about NIST standards for onboarding and recovery, see [NIST Special Publication 800-63A](https://pages.nist.gov/800-63-3/sp800-63a.html#sec4).
For more information about NIST standards for onboarding and recovery, see [NIST
Keep these limitations in mind:

- When using a one-time Temporary Access Pass to register a Passwordless method such as FIDO2 or Phone sign-in, the user must complete the registration within 10 minutes of sign-in with the one-time Temporary Access Pass. This limitation does not apply to a Temporary Access Pass that can be used more than once.
-- Temporary Access Pass is in public preview and currently not available in Azure for US Government.
- Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they have signed in with a Temporary Access Pass. Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience does not currently support FIDO2 and Phone Sign-in registration.
-- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter, or during Windows Setup/Out-of-Box-Experience (OOBE), Autopilot, or to deploy Windows Hello for Business.
+- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter.
## Troubleshooting
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
Title: Frequently asked questions (FAQs) about CloudKnox Permissions Management
-description: Frequently asked questions (FAQs) about CloudKnox Permissions Management.
+ Title: Frequently asked questions (FAQs) about Permissions Management
+description: Frequently asked questions (FAQs) about Permissions Management.
# Frequently asked questions (FAQs)

> [!IMPORTANT]
-> CloudKnox Permissions Management (CloudKnox) is currently in PREVIEW.
+> Entra Permissions Management is currently in PREVIEW.
> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

> [!NOTE]
-> The CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
-This article answers frequently asked questions (FAQs) about CloudKnox Permissions Management (CloudKnox).
+This article answers frequently asked questions (FAQs) about Permissions Management.
-## What's CloudKnox Permissions Management?
+## What's Permissions Management?
-CloudKnox is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). CloudKnox detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
+Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
-## What are the prerequisites to use CloudKnox?
+## What are the prerequisites to use Permissions Management?
-CloudKnox supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use CloudKnox.
+Permissions Management supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use Permissions Management.
-## Can a customer use CloudKnox if they have other identities with access to their IaaS platform that aren't yet in Azure AD (for example, if part of their business has Okta or AWS Identity & Access Management (IAM))?
+## Can a customer use Permissions Management if they have other identities with access to their IaaS platform that aren't yet in Azure AD (for example, if part of their business has Okta or AWS Identity & Access Management (IAM))?
Yes, a customer can detect, mitigate, and monitor the risk of 'backdoor' accounts that are local to AWS IAM, GCP, or from other identity providers such as Okta or AWS IAM.
-## Where can customers access CloudKnox?
+## Where can customers access Permissions Management?
-Customers can access the CloudKnox interface with a link from the Azure AD extension in the Azure portal.
+Customers can access the Permissions Management interface with a link from the Azure AD extension in the Azure portal.
-## Can non-cloud customers use CloudKnox on-premises?
+## Can non-cloud customers use Permissions Management on-premises?
-No, CloudKnox is a hosted cloud offering.
+No, Permissions Management is a hosted cloud offering.
-## Can non-Azure customers use CloudKnox?
+## Can non-Azure customers use Permissions Management?
-Yes, non-Azure customers can use our solution. CloudKnox is a multi-cloud solution so even customers who have no subscription to Azure can benefit from it.
+Yes, non-Azure customers can use our solution. Permissions Management is a multi-cloud solution so even customers who have no subscription to Azure can benefit from it.
-## Is CloudKnox available for tenants hosted in the European Union (EU)?
+## Is Permissions Management available for tenants hosted in the European Union (EU)?
-No, the CloudKnox Permissions Management (CloudKnox) PREVIEW is currently not available for tenants hosted in the European Union (EU).
+No, the Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
-## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does CloudKnox provide?
+## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does Permissions Management provide?
-CloudKnox complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while CloudKnox allows multi-cloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
+Permissions Management complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while Permissions Management allows multi-cloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
-## What languages does CloudKnox support?
+## What languages does Permissions Management support?
-CloudKnox currently supports English.
+Permissions Management currently supports English.
-## What public cloud infrastructures are supported by CloudKnox?
+## What public cloud infrastructures are supported by Permissions Management?
-CloudKnox currently supports the three major public clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
+Permissions Management currently supports the three major public clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
-## Does CloudKnox support hybrid environments?
+## Does Permissions Management support hybrid environments?
-CloudKnox currently doesn't support hybrid environments.
+Permissions Management currently doesn't support hybrid environments.
-## What types of identities are supported by CloudKnox?
+## What types of identities are supported by Permissions Management?
-CloudKnox supports user identities (for example, employees, customers, external partners) and workload identities (for example, virtual machines, containers, web apps, serverless functions).
+Permissions Management supports user identities (for example, employees, customers, external partners) and workload identities (for example, virtual machines, containers, web apps, serverless functions).
-<!## Is CloudKnox General Data Protection Regulation (GDPR) compliant?
+<!## Is Permissions Management General Data Protection Regulation (GDPR) compliant?
-CloudKnox is currently not GDPR compliant.>
+Permissions Management is currently not GDPR compliant.>
-## Is CloudKnox available in Government Cloud?
+## Is Permissions Management available in Government Cloud?
-No, CloudKnox is currently not available in Government clouds.
+No, Permissions Management is currently not available in Government clouds.
-## Is CloudKnox available for sovereign clouds?
+## Is Permissions Management available for sovereign clouds?
-No, CloudKnox is currently not available in sovereign Clouds.
+No, Permissions Management is currently not available in sovereign Clouds.
-## How does CloudKnox collect insights about permissions usage?
+## How does Permissions Management collect insights about permissions usage?
-CloudKnox has a data collector that collects access permissions assigned to various identities, activity logs, and resources metadata. This gathers full visibility into permissions granted to all identities to access the resources and details on usage of granted permissions.
+Permissions Management has a data collector that collects access permissions assigned to various identities, activity logs, and resource metadata. This provides full visibility into permissions granted to all identities to access the resources, along with details on the usage of granted permissions.
-## How does CloudKnox evaluate cloud permissions risk?
+## How does Permissions Management evaluate cloud permissions risk?
-CloudKnox offers granular visibility into all identities and their permissions granted versus used, across cloud infrastructures to uncover any action performed by any identity on any resource. This isn't limited to just user identities, but also workload identities such as virtual machines, access keys, containers, and scripts. The dashboard gives an overview of permission profile to locate the riskiest identities and resources.
+Permissions Management offers granular visibility into all identities and their permissions granted versus used, across cloud infrastructures to uncover any action performed by any identity on any resource. This isn't limited to just user identities, but also workload identities such as virtual machines, access keys, containers, and scripts. The dashboard gives an overview of permission profile to locate the riskiest identities and resources.
## What is the Permissions Creep Index?

The Permissions Creep Index (PCI) is a quantitative measure of risk associated with an identity or role determined by comparing permissions granted versus permissions exercised. It allows users to instantly evaluate the level of risk associated with the number of unused or over-provisioned permissions across identities and resources. It measures how much damage identities can cause based on the permissions they have.
-## How can customers use CloudKnox to delete unused or excessive permissions?
+## How can customers use Permissions Management to delete unused or excessive permissions?
-CloudKnox allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size permissions of that identity to only the permissions that are being used for day-to-day operations. All unused and other risky permissions can be automatically removed.
+Permissions Management allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size permissions of that identity to only the permissions that are being used for day-to-day operations. All unused and other risky permissions can be automatically removed.
-## How can customers grant permissions on-demand with CloudKnox?
+## How can customers grant permissions on-demand with Permissions Management?
For any break-glass or one-off scenarios where an identity needs to perform a specific set of actions on a set of specific resources, the identity can request those permissions on-demand for a limited period with a self-service workflow. Customers can either use the built-in workflow engine or their IT service management (ITSM) tool. The user experience is the same for any identity type, identity source (local, enterprise directory, or federated) and cloud.
For any break-glass or one-off scenarios where an identity needs to perform a sp
Just-in-time (JIT) access is a method used to enforce the principle of least privilege to ensure identities are given the minimum level of permissions to perform the task at hand. Permissions on-demand are a type of JIT access that allows the temporary elevation of permissions, enabling identities to access resources on a by-request, timed basis.
-## How can customers monitor permissions usage with CloudKnox?
+## How can customers monitor permissions usage with Permissions Management?
-Customers only need to track the evolution of their Permission Creep Index to monitor permissions usage. They can do this in the "Analytics" tab in their CloudKnox dashboard where they can see how the PCI of each identity or resource is evolving over time.
+Customers only need to track the evolution of their Permission Creep Index to monitor permissions usage. They can do this in the "Analytics" tab in their Permissions Management dashboard where they can see how the PCI of each identity or resource is evolving over time.
## Can customers generate permissions usage reports?
-Yes, CloudKnox has various types of system report available that capture specific data sets. These reports allow customers to:
+Yes, Permissions Management has various types of system reports available that capture specific data sets. These reports allow customers to:
- Make timely decisions.
- Analyze usage trends and system/user performance.
- Identify high-risk areas.

For information about permissions usage reports, see [Generate and download the Permissions analytics report](product-permissions-analytics-reports.md).
-## Does CloudKnox integrate with third-party ITSM (Information Technology Security Management) tools?
+## Does Permissions Management integrate with third-party ITSM (Information Technology Security Management) tools?
-CloudKnox integrates with ServiceNow.
+Permissions Management integrates with ServiceNow.
+## How is Permissions Management being deployed?
-## How is CloudKnox being deployed?
+Customers with the Global Admin role must first onboard Permissions Management on their Azure AD tenant, and then onboard their AWS accounts, GCP projects, and Azure subscriptions. More details about onboarding can be found in our product documentation.
-Customers with Global Admin role have first to onboard CloudKnox on their Azure AD tenant, and then onboard their AWS accounts, GCP projects, and Azure subscriptions. More details about onboarding can be found in our product documentation.
-
-## How long does it take to deploy CloudKnox?
+## How long does it take to deploy Permissions Management?
It depends on each customer and how many AWS accounts, GCP projects, and Azure subscriptions they have.
-## Once CloudKnox is deployed, how fast can I get permissions insights?
+## Once Permissions Management is deployed, how fast can I get permissions insights?
Once fully onboarded with data collection set up, customers can access permissions usage insights within hours. Our machine-learning engine refreshes the Permission Creep Index every hour so that customers can start their risk assessment right away.
-## Is CloudKnox collecting and storing sensitive personal data?
+## Is Permissions Management collecting and storing sensitive personal data?
-No, CloudKnox doesn't have access to sensitive personal data.
+No, Permissions Management doesn't have access to sensitive personal data.
-## Where can I find more information about CloudKnox?
+## Where can I find more information about Permissions Management?
You can read our blog and visit our web page. You can also get in touch with your Microsoft point of contact to schedule a demo.

## Resources

- [Public Preview announcement blog](https://www.aka.ms/CloudKnox-Public-Preview-Blog)
-- [CloudKnox Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
-
+- [Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
## Next steps

-- For an overview of CloudKnox, see [What's CloudKnox Permissions Management?](overview.md).
-- For information on how to onboard CloudKnox in your organization, see [Enable CloudKnox in your organization](onboard-enable-tenant.md).
+- For an overview of Permissions Management, see [What's Permissions Management?](overview.md).
+- For information on how to onboard Permissions Management in your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
active-directory Scenario Desktop Acquire Token Device Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-device-code-flow.md
Title: Acquire a token to call a web API using device code flow (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using device code flow
Last updated 08/25/2021 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Integrated Windows Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-integrated-windows-authentication.md
Title: Acquire a token to call a web API using integrated Windows authentication (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using integrated Windows authentication
Last updated 08/25/2021 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-interactive.md
Title: Acquire a token to call a web API interactively (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app interactively
Last updated 08/25/2021 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Username Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-username-password.md
Title: Acquire a token to call a web API using username and password (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using username and password.
Last updated 08/25/2021 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
Title: Acquire a token to call a web API using web account manager (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app using web account manager
Last updated 08/25/2021 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token.md
Title: Acquire a token to call a web API (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app
Last updated 08/25/2021 #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
let accounts = await msalTokenCache.getAllAccounts();
const tokenRequest = {
    code: response["authorization_code"],
- codeVerifier: verifier // PKCE Code Verifier
+ codeVerifier: verifier, // PKCE Code Verifier
    redirectUri: "your_redirect_uri",
    scopes: ["User.Read"],
};
active-directory Groups Dynamic Rule More Efficient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-more-efficient.md
Minimize the usage of the 'match' operator in rules as much as possible. Instead
It's better to use rules like:

-- `user.city -contains "ago,"`
-- `user.city -startswith "Lag,"`
+- `user.city -contains "ago"`
+- `user.city -startswith "Lag"`
Or, best of all:
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
After you sign in to the Azure portal, you can create a new tenant for your orga
1. Select **Next: Configuration** to move on to the Configuration tab.
+1. On the Configuration tab, enter the following information:
+ ![Azure Active Directory - Create a tenant page - configuration tab ](media/active-directory-access-create-new-tenant/azure-ad-create-new-tenant.png)
-1. On the Configuration tab, enter the following information:
-
   - Type _Contoso Organization_ into the **Organization name** box.
   - Type _Contosoorg_ into the **Initial domain name** box.
active-directory Security Operations Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-devices.md
You can create an alert that notifies appropriate administrators when a device i
```
SigninLogs
-| where ResourceDisplayName == ΓÇ£Device Registration ServiceΓÇ¥
+| where ResourceDisplayName == "Device Registration Service"
-| where conditionalAccessStatus ==ΓÇ¥successΓÇ¥
+| where conditionalAccessStatus == "success"
-| where AuthenticationRequirement <> ΓÇ£multiFactorAuthenticationΓÇ¥
+| where AuthenticationRequirement <> "multiFactorAuthentication"
```

You can also use [Microsoft Intune to set and monitor device compliance policies](/mem/intune/protect/device-compliance-get-started).
It might not be possible to block access to all cloud and software-as-a-service
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant ==false| If requiring sign-in from compliant devices, alert when:<br><li> any sign in by non-compliant devices.<li> any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
+| Sign-ins by non-compliant devices| High| Sign-in logs| DeviceDetail.isCompliant == false| If requiring sign-in from compliant devices, alert when:<br><li> any sign in by non-compliant devices.<li> any access without MFA or a trusted location.<p>If working toward requiring devices, monitor for suspicious sign-ins.<br>[Azure Sentinel template](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/SigninLogs/SuspiciousSignintoPrivilegedAccount.yaml) |
| Sign-ins by unknown devices| Low| Sign-in logs| <li>DeviceDetail is empty<li>Single factor authentication<li>From a non-trusted location| Look for: <br><li>any access from out of compliance devices.<li>any access without MFA or trusted location |
It might not be possible to block access to all cloud and software-as-a-service
```
SigninLogs
-| where DeviceDetail.isCompliant ==false
+| where DeviceDetail.isCompliant == false
-| where conditionalAccessStatus == ΓÇ£successΓÇ¥
+| where conditionalAccessStatus == "success"
```
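The same query can also be run outside the portal, for example from a scheduled script; a sketch using the Az.OperationalInsights module (the workspace Id is a placeholder):

```powershell
# Sketch: run the non-compliant-device sign-in query against a Log Analytics workspace.
$query = @'
SigninLogs
| where DeviceDetail.isCompliant == false
| where conditionalAccessStatus == "success"
'@
Invoke-AzOperationalInsightsQuery -WorkspaceId "00000000-0000-0000-0000-000000000000" -Query $query |
    Select-Object -ExpandProperty Results
```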
Attackers who have compromised a user's device may retrieve the [BitLocker](/w
| What to monitor| Risk Level| Where| Filter/sub-filter| Notes |
| - |- |- |- |- |
-| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker keyΓÇ¥| Look for <br><li>key retrieval`<li> other anomalous behavior by users retrieving keys. |
+| Key retrieval| Medium| Audit logs| OperationName == "Read BitLocker key"| Look for <br><li>key retrieval`<li> other anomalous behavior by users retrieving keys. |
In LogAnalytics create a query such as
In LogAnalytics create a query such as
```
AuditLogs
-| where OperationName == "Read BitLocker keyΓÇ¥
+| where OperationName == "Read BitLocker key"
```

## Device administrator roles
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Welcome to what's new in Azure Active Directory application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## May 2022
+
+### New articles
+
+- [My Apps portal overview](myapps-overview.md)
+
+### Updated articles
+
+- [Tutorial: Configure Datawiza with Azure Active Directory for secure hybrid access](datawiza-with-azure-ad.md)
+- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md)
+- [Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication](migrate-okta-federation-to-azure-active-directory.md)
+- [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)
+ ## March 2022 ### New articles
active-directory Pim Resource Roles Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md
Title: Configure security alerts for Azure resource roles in Privileged Identity Management - Azure Active Directory | Microsoft Docs
+ Title: Configure security alerts for Azure roles in Privileged Identity Management - Azure Active Directory | Microsoft Docs
description: Learn how to configure security alerts for Azure resource roles in Azure AD Privileged Identity Management (PIM). documentationcenter: ''
na Previously updated : 10/07/2021 Last updated : 06/03/2022
-# Configure security alerts for Azure resource roles in Privileged Identity Management
+# Configure security alerts for Azure roles in Privileged Identity Management
Privileged Identity Management (PIM) generates alerts when there is suspicious or unsafe activity in your Azure Active Directory (Azure AD) organization. When an alert is triggered, it shows up on the Alerts page.
Select an alert to see a report that lists the users or roles that triggered the
## Alerts
-| Alert | Severity | Trigger | Recommendation |
-| | | | |
-| **Too many owners assigned to a resource** |Medium |Too many users have the owner role. |Review the users in the list and reassign some to less privileged roles. |
-| **Too many permanent owners assigned to a resource** |Medium |Too many users are permanently assigned to a role. |Review the users in the list and re-assign some to require activation for role use. |
-| **Duplicate role created** |Medium |Multiple roles have the same criteria. |Use only one of these roles. |
+Alert | Severity | Trigger | Recommendation
+ | | |
+**Too many owners assigned to a resource** | Medium | Too many users have the owner role. | Review the users in the list and reassign some to less privileged roles.
+**Too many permanent owners assigned to a resource** | Medium | Too many users are permanently assigned to a role. | Review the users in the list and re-assign some to require activation for role use.
+**Duplicate role created** | Medium | Multiple roles have the same criteria. | Use only one of these roles.
+**Roles are being assigned outside of Privileged Identity Management (Preview)** | High | A role is managed directly through the Azure IAM resource blade or the Azure Resource Manager API. | Review the users in the list and remove them from privileged roles assigned outside of Privileged Identity Management.
+
+> [!NOTE]
+> During the public preview of the **Roles are being assigned outside of Privileged Identity Management (Preview)** alert, Microsoft supports only permissions that are assigned at the subscription level.
### Severity
active-directory Blinq Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blinq-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Navigate to [Blinq Admin Console](https://dash.blinq.me) in a separate browser tab. 1. If you aren't logged in to Blinq you will need to do so.
-1. Click on your workspace in the top left corner of the screen.
-1. In the dropdown click **Settings**.
+1. Click on your workspace in the top left hand corner of the screen and select **Settings** in the dropdown menu.
+
+ [![Screenshot of the Blinq settings option.](media/blinq-provisioning-tutorial/blinq-settings.png)](media/blinq-provisioning-tutorial/blinq-settings.png#lightbox)
1. On the **Integrations** page you should see **Team Card Provisioning**, which contains a URL and Token. Generate the token by clicking **Generate**, then copy the **URL** and **Token**. Insert the URL and the Token into the **Tenant URL** and **Secret Token** fields in the Azure portal, respectively.
+ [![Screenshot of the Blinq integration page.](media/blinq-provisioning-tutorial/blinq-integrations-page.png)](media/blinq-provisioning-tutorial/blinq-integrations-page.png#lightbox)
## Step 3. Add Blinq from the Azure AD application gallery

Add Blinq from the Azure AD application gallery to start managing provisioning to Blinq. If you have previously set up Blinq for SSO, you can use the same application. However, it's recommended you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
A drawback with the traditional CNI is the exhaustion of pod IP addresses as the
### Additional prerequisites
+> [!NOTE]
+> When using dynamic allocation of IPs, exposing an application as a Private Link Service using a Kubernetes Load Balancer Service is not supported.
The [prerequisites][prerequisites] already listed for Azure CNI still apply, but there are a few additional limitations:

* Only Linux node clusters and node pools are supported.
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
You must ensure that:
* The secret is created in the `kube-system` namespace.

```yaml
-apiVerison: v1
+apiVersion: v1
kind: Secret
metadata:
  name: custom-ca-trust-secret
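The `data` values of a Kubernetes secret must be base64-encoded; one hedged way to produce the value for the CA bundle (file name illustrative) before applying the manifest:

```powershell
# Sketch: base64-encode a PEM CA bundle for the secret's data field.
$pem = Get-Content -Raw .\custom-root-ca.pem
[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($pem))
# Paste the output into the manifest, then: kubectl apply -f custom-ca-trust-secret.yaml
```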
For more information on AKS security best practices, see [Best practices for clu
[az-extension-update]: /cli/azure/extension#az-extension-update
[az-feature-list]: /cli/azure/feature#az-feature-list
[az-feature-register]: /cli/azure/feature#az-feature-register
-[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-provider-register]: /cli/azure/provider#az-provider-register
aks Deployment Center Launcher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deployment-center-launcher.md
Title: Deployment Center for Azure Kubernetes description: Deployment Center in Azure DevOps simplifies setting up a robust Azure DevOps pipeline for your application
Last updated 07/12/2019

# Deployment Center for Azure Kubernetes
aks Keda Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md
However, these external scalers aren't supported as part of the add-on and rely
<!-- LINKS - external -->
[keda-scalers]: https://keda.sh/docs/scalers/
[keda-metrics]: https://keda.sh/docs/latest/operate/prometheus/
-[keda-event-docs]: https://keda.sh/docs/latest/operate/kubernetes-events/
+[keda-event-docs]: https://keda.sh/docs/2.7/operate/events/
[keda-sample]: https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
While AKS customers can route egress traffic through an Azure Load Balancer, there are limits on the number of outbound traffic flows possible.
-Azure NAT Gateway allows up to 64,000 outbound UDP and TCP traffic flows per IP address with a maximum of 16 IP addresses.
+Azure NAT Gateway allows up to 64,512 outbound UDP and TCP traffic flows per IP address with a maximum of 16 IP addresses.
This article will show you how to create an AKS cluster with a Managed NAT Gateway for egress traffic.
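For orientation, creating such a cluster generally comes down to a single CLI call; a sketch with placeholder resource names (the flags follow the `az aks create` managed NAT gateway options):

```powershell
# Sketch: AKS cluster whose egress traffic flows through a managed NAT gateway.
az aks create `
    --resource-group myResourceGroup `
    --name myNatCluster `
    --node-count 3 `
    --outbound-type managedNATGateway `
    --nat-gateway-managed-outbound-ip-count 2 `
    --nat-gateway-idle-timeout 4
```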
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
OSM provides the following capabilities and features:
- Define and execute fine grained access control policies for services.
- Monitor and debug services using observability and insights into application metrics.
- Integrate with external certificate management.
-- Integrates with existing ingress solutions such as the [Azure Gateway Ingress Controller][agic], [NGINX][nginx], and [Contour][contour]. For more details on how ingress works with OSM, see [Using Ingress to manage external access to services within the cluster][osm-ingress]. For an example on integrating OSM with Contour for ingress, see [Ingress with Contour][osm-contour]. For an example on integrating OSM with ingress controllers that use the `networking.k8s.io/v1` API, such as NGINX, see [Ingress with Kubernetes Nginx Ingress Controller][osm-nginx].
+- Integrates with existing ingress solutions such as [NGINX][nginx], [Contour][contour], and [Web Application Routing][web-app-routing]. For more details on how ingress works with OSM, see [Using Ingress to manage external access to services within the cluster][osm-ingress]. For an example on integrating OSM with Contour for ingress, see [Ingress with Contour][osm-contour]. For an example on integrating OSM with ingress controllers that use the `networking.k8s.io/v1` API, such as NGINX, see [Ingress with Kubernetes Nginx Ingress Controller][osm-nginx]. For more details on using Web Application Routing, which automatically integrates with OSM, see [Web Application Routing][web-app-routing].
## Example scenarios
After enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep t
[osm-onboard-app]: https://release-v1-0.docs.openservicemesh.io/docs/guides/app_onboarding/
[ip-tables-redirection]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/
[global-exclusion]: https://docs.openservicemesh.io/docs/guides/traffic_management/iptables_redirection/#global-outbound-ip-range-exclusions
-[agic]: ../application-gateway/ingress-controller-overview.md
[nginx]: https://github.com/kubernetes/ingress-nginx
[contour]: https://projectcontour.io/
[osm-ingress]: https://release-v1-0.docs.openservicemesh.io/docs/guides/traffic_management/ingress/
[osm-contour]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_contour
[osm-nginx]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx
+[web-app-routing]: web-app-routing.md
aks Open Service Mesh Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-integrations.md
The Open Service Mesh (OSM) add-on integrates with features provided by Azure as
## Ingress
-Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with either [NGINX ingress][osm-nginx] or [Contour ingress][osm-contour]. Open source projects integrating with OSM, including NGINX ingress and Contour ingress, are not covered by the [AKS support policy][aks-support-policy].
+Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with [Web Application Routing][web-app-routing], [NGINX ingress][osm-nginx], or [Contour ingress][osm-contour]. Open source projects integrating with OSM, including NGINX ingress and Contour ingress, are not covered by the [AKS support policy][aks-support-policy].
Using [Azure Gateway Ingress Controller (AGIC)][agic] for ingress with OSM is not supported and not recommended.
OSM has several types of certificates it uses to operate on your AKS cluster. OS
[osm-cert-manager]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-cert-manager
[open-source-integrations]: open-service-mesh-integrations.md#additional-open-source-integrations
[osm-traffic-management-example]: https://github.com/MicrosoftDocs/azure-docs/pull/81085/files
-[osm-tresor]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-osms-tresor-certificate-issuer
+[osm-tresor]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-osms-tresor-certificate-issuer
+[web-app-routing]: web-app-routing.md
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
az keyvault secret set --vault-name MyAKSGMSAVault --name "GMSADomainUserCred" -
> [!NOTE]
> Use the Fully Qualified Domain Name for the Domain rather than the Partially Qualified Domain Name that may be used on internal networks.
+>
+> The above command escapes the `value` parameter for running the Azure CLI on a Linux shell. When running the Azure CLI command on Windows PowerShell, you don't need to escape characters in the `value` parameter.
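To make the shell difference concrete, a hedged PowerShell equivalent is sketched below; the credential value is a placeholder that follows the article's FQDN\user:password convention, and the backslash needs no extra escaping here:

```powershell
# Sketch: setting the same secret from PowerShell; the single backslash is passed through as-is.
az keyvault secret set `
    --vault-name MyAKSGMSAVault `
    --name "GMSADomainUserCred" `
    --value "contoso.com\MyDomainUser:MyDomainUserPassword"
```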
## Optional: Use a custom VNET with custom DNS
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use KMS etcd encryption in Azure Kubernetes Service (AKS) (Preview)
description: Learn how to use kms etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 04/11/2022 Last updated : 06/06/2022
The following limitations apply when you integrate KMS etcd encryption with AKS:
* Changing of key ID, including key name and key version.
* Deletion of the key, Key Vault, or the associated identity.
* KMS etcd encryption doesn't work with System-Assigned Managed Identity. The keyvault access-policy is required to be set before the feature is enabled. In addition, System-Assigned Managed Identity isn't available until cluster creation, thus there's a cycle dependency.
-* Using Azure Key Vault with PrivateLink enabled.
* Using more than 2000 secrets in a cluster.
-* Managed HSM Support
* Bring your own (BYO) Azure Key Vault from another tenant. - ## Create a KeyVault and key > [!WARNING]
api-management Add Api Manually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/add-api-manually.md
Test the operation in the Azure portal. You can also test it in the **Developer
This section shows how to add a wildcard operation. A wildcard operation lets you pass an arbitrary value with an API request. Instead of creating separate GET operations as shown in the previous sections, you could create a wildcard GET operation.
+> [!CAUTION]
+> Use care when configuring a wildcard operation. This configuration may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#improper-assets-management).
+ ### Add the operation 1. Select the API you created in the previous step.
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
Previously updated : 03/04/2022 Last updated : 06/03/2022
This article provides a reference for API Management access restriction policies
## <a name="AccessRestrictionPolicies"></a> Access restriction policies - [Check HTTP header](#CheckHTTPHeader) - Enforces existence and/or value of an HTTP header.
+- [Get authorization context](#GetAuthorizationContext) - Gets the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.
- [Limit call rate by subscription](#LimitCallRate) - Prevents API usage spikes by limiting call rate, on a per subscription basis. - [Limit call rate by key](#LimitCallRateByKey) - Prevents API usage spikes by limiting call rate, on a per key basis. - [Restrict caller IPs](#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges. - [Set usage quota by subscription](#SetUsageQuota) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. - [Set usage quota by key](#SetUsageQuotaByKey) - Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis.-- [Validate JWT](#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter.
+- [Validate JWT](#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP header or a specified query parameter.
- [Validate client certificate](#validate-client-certificate) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims. > [!TIP]
Use the `check-header` policy to enforce that a request has a specified HTTP hea
| -- | - | -- | - | | failed-check-error-message | Error message to return in the HTTP response body if the header doesn't exist or has an invalid value. This message must have any special characters properly escaped. | Yes | N/A | | failed-check-httpcode | HTTP Status code to return if the header doesn't exist or has an invalid value. | Yes | N/A |
-| header-name | The name of the HTTP Header to check. | Yes | N/A |
+| header-name | The name of the HTTP header to check. | Yes | N/A |
| ignore-case | Can be set to True or False. If set to True, case is ignored when the header value is compared against the set of acceptable values. | Yes | N/A | ### Usage
This policy can be used in the following policy [sections](./api-management-howt
- **Policy scopes:** all scopes
+## <a name="GetAuthorizationContext"></a> Get authorization context
+
+Use the `get-authorization-context` policy to get the authorization context of a specified [authorization](authorizations-overview.md) (preview) configured in the API Management instance.
+
+The policy fetches and stores authorization and refresh tokens from the configured authorization provider.
+
+If `identity-type=jwt` is configured, a JWT token is required and will be validated. The audience of this token must be `https://azure-api.net/authorization-manager`.
+++
+### Policy statement
+
+```xml
+<get-authorization-context
+ provider-id="authorization provider id"
+ authorization-id="authorization id"
+ context-variable-name="variable name"
+ identity-type="managed | jwt"
+ identity="JWT bearer token"
+ ignore-error="true | false" />
+```
+
+### Examples
+
+#### Example 1: Get token back
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="github-01"
+ authorization-id="auth-01"
+ context-variable-name="auth-context"
+ identity-type="managed"
+ identity="@(context.Request.Headers["Authorization"][0].Replace("Bearer ", ""))"
+ ignore-error="false" />
+<!-- Return the token -->
+<return-response>
+ <set-status code="200" />
+ <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
+</return-response>
+```
+
+#### Example 2: Get token back with dynamically set attributes
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="@(context.Request.Url.Query.GetValueOrDefault("authorizationProviderId"))"
+ authorization-id="@(context.Request.Url.Query.GetValueOrDefault("authorizationId"))"
+ context-variable-name="auth-context"
+ ignore-error="false"
+ identity-type="managed" />
+<!-- Return the token -->
+<return-response>
+ <set-status code="200" />
+ <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
+</return-response>
+```
+
+#### Example 3: Attach the token to the backend call
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="github-01"
+ authorization-id="auth-01"
+ context-variable-name="auth-context"
+ identity-type="managed"
+ ignore-error="false" />
+<!-- Attach the token to the backend call -->
+<set-header name="Authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+</set-header>
+```
+
+#### Example 4: Get token from incoming request and return token
+
+```xml
+<!-- Add to inbound policy. -->
+<get-authorization-context
+ provider-id="github-01"
+ authorization-id="auth-01"
+ context-variable-name="auth-context"
+ identity-type="jwt"
+ identity="@(context.Request.Headers["Authorization"][0].Replace("Bearer ", ""))"
+ ignore-error="false" />
+<!-- Return the token -->
+<return-response>
+ <set-status code="200" />
+ <set-body template="none">@(((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</set-body>
+</return-response>
+```
+
+### Elements
+
+| Name | Description | Required |
+| -- | - | -- |
+| get-authorization-context | Root element. | Yes |
+
+### Attributes
+
+| Name | Description | Required | Default |
+|||||
+| provider-id | The authorization provider resource identifier. | Yes | |
+| authorization-id | The authorization resource identifier. | Yes | |
+| context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). | Yes | |
+| identity-type | Type of identity to be checked against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute. | No | managed |
+| identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID | No | |
+| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource isn't found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: an error with `500 Internal Server Error` status code is returned. | No | false |
+
+### Authorization object
+
+The Authorization context variable receives an object of type `Authorization`.
+
+```csharp
+class Authorization
+{
+ public string AccessToken { get; }
+ public IReadOnlyDictionary<string, object> Claims { get; }
+}
+```
+
+| Property Name | Description |
+| -- | -- |
+| AccessToken | Bearer access token to authorize a backend HTTP request. |
+| Claims | Claims returned from the authorization server's token response API (see [RFC6749#section-5.1](https://datatracker.ietf.org/doc/html/rfc6749#section-5.1)). |
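+
+For illustration, here's a minimal sketch of reading a claim from the `Authorization` object (it assumes the token response includes a `scope` claim; the header name is arbitrary):
+
+```xml
+<!-- Expose a claim from the authorization context in a response header (sketch). -->
+<set-header name="X-Token-Scope" exists-action="override">
+    <value>@($"{((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.Claims?.GetValueOrDefault("scope", "")}")</value>
+</set-header>
+```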
+
+### Usage
+
+This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
+
+- **Policy sections:** inbound
+
+- **Policy scopes:** all scopes
++ ## <a name="LimitCallRate"></a> Limit call rate by subscription The `rate-limit` policy prevents API usage spikes on a per subscription basis by limiting the call rate to a specified number per a specified time period. When the call rate is exceeded, the caller receives a `429 Too Many Requests` response status code.
This policy can be used in the following policy [sections](./api-management-howt
## <a name="ValidateJWT"></a> Validate JWT
-The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from either a specified HTTP Header or a specified query parameter.
+The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from either a specified HTTP header or a specified query parameter.
> [!IMPORTANT] > The `validate-jwt` policy requires that the `exp` registered claim is included in the JWT token, unless `require-expiration-time` attribute is specified and set to `false`.
api-management Api Management Cross Domain Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-cross-domain-policies.md
Use the `cross-domain` policy to make the API accessible from Adobe Flash and Mi
|-|--|--| |cross-domain|Root element. Child elements must conform to the [Adobe cross-domain policy file specification](https://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/CrossDomain_PolicyFile_Specification.pdf).|Yes|
+> [!CAUTION]
+> Use the `*` wildcard with care in policy settings. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ ### Usage This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
This example demonstrates how to support [pre-flight requests](https://developer
|expose-headers|This element contains `header` elements specifying names of the headers that will be accessible by the client.|No|N/A| |header|Specifies a header name.|At least one `header` element is required in `allowed-headers` or `expose-headers` if the section is present.|N/A|
+> [!CAUTION]
+> Use the `*` wildcard with care in policy settings. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ ### Attributes |Name|Description|Required|Default|
api-management Api Management Get Started Revise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-revise-api.md
Use this procedure to create and update a release.
The notes you specify appear in the change log. You can see them in the output of the previous command.
-1. When you create a release, the `--notes` parameter is optional. You can add or change the notes later using the [az apim api release update](/cli/azure/apim/api/release#az_apim_api_release_update) command:
+1. When you create a release, the `--notes` parameter is optional. You can add or change the notes later using the [az apim api release update](/cli/azure/apim/api/release#az-apim-api-release-update) command:
```azurecli az apim api release update --resource-group apim-hello-word-resource-group \
api-management Api Management Howto Add Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-add-products.md
In this tutorial, you learn how to:
1. Select **Create** to create your new product.
+> [!CAUTION]
+> Use care when configuring a product that doesn't require a subscription. This configuration may be overly permissive and may make the product's APIs more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ ### [Azure CLI](#tab/azure-cli) To begin using Azure CLI:
You can specify various values for your product:
| `--subscriptions-limit` | Optionally, limit the count of multiple simultaneous subscriptions.| | `--legal-terms` | You can include the terms of use for the product, which subscribers must accept to use the product. |
+> [!CAUTION]
+> Use care when configuring a product that doesn't require a subscription. This configuration may be overly permissive and may make the product's APIs more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ To see your current products, use the [az apim product list](/cli/azure/apim/product#az-apim-product-list) command: ```azurecli
api-management Api Management Howto Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-properties.md
az apim nv delete --resource-group apim-hello-word-resource-group \
The examples in this section use the named values shown in the following table. | Name | Value | Secret |
-|--|-|--||
+|--|-|--|
| ContosoHeader | `TrackingId` | False | | ContosoHeaderValue | •••••••••••••••••••••• | True | | ExpressionProperty | `@(DateTime.Now.ToString())` | False |
+| ContosoHeaderValue2 | `This is a header value.` | False |
To use a named value in a policy, place its display name inside a double pair of braces like `{{ContosoHeader}}`, as shown in the following example:
If you look at the outbound [API trace](api-management-howto-api-inspector.md) f
:::image type="content" source="media/api-management-howto-properties/api-management-api-inspector-trace.png" alt-text="API Inspector trace":::
+String interpolation can also be used with named values.
+
+```xml
+<set-header name="CustomHeader" exists-action="override">
+ <value>@($"The URL encoded value is {System.Net.WebUtility.UrlEncode("{{ContosoHeaderValue2}}")}")</value>
+</set-header>
+```
+
+The value for `CustomHeader` will be `The URL encoded value is This+is+a+header+value.`.
+ > [!CAUTION] > If a policy references a secret in Azure Key Vault, the value from the key vault will be visible to users who have access to subscriptions enabled for [API request tracing](api-management-howto-api-inspector.md).
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
## [Access restriction policies](api-management-access-restriction-policies.md) - [Check HTTP header](api-management-access-restriction-policies.md#CheckHTTPHeader) - Enforces existence and/or value of an HTTP header.
+- [Get authorization context](api-management-access-restriction-policies.md#GetAuthorizationContext) - Gets the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance.
- [Limit call rate by subscription](api-management-access-restriction-policies.md#LimitCallRate) - Prevents API usage spikes by limiting call rate, on a per subscription basis. - [Limit call rate by key](api-management-access-restriction-policies.md#LimitCallRateByKey) - Prevents API usage spikes by limiting call rate, on a per key basis. - [Restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) - Filters (allows/denies) calls from specific IP addresses and/or address ranges.
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
API publishers can [create subscriptions](api-management-howto-create-subscripti
By default, a developer can only access a product or API by using a subscription key. Under certain scenarios, API publishers might want to publish a product or a particular API to the public without the requirement of subscriptions. While a publisher could choose to enable unsecured access to certain APIs, configuring another mechanism to secure client access is recommended.
+> [!CAUTION]
+> Use care when configuring a product or an API that doesn't require a subscription. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
+ To disable the subscription requirement using the portal: * **Disable requirement for product** - Disable **Requires subscription** on the **Settings** page of the product.
api-management Authorizations How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to.md
+
+ Title: Create and use authorization in Azure API Management | Microsoft Docs
+description: Learn how to create and use an authorization in Azure API Management. An authorization manages authorization tokens to OAuth 2.0 backend services. The example uses GitHub as an identity provider.
++++ Last updated : 06/03/2022+++
+# Configure and use an authorization
+
+In this article, you learn how to create an [authorization](authorizations-overview.md) (preview) in API Management and call a GitHub API that requires an authorization token. The authorization code grant type will be used.
+
+Four steps are needed to set up an authorization with the authorization code grant type:
+
+1. Register an application in the identity provider (in this case, GitHub).
+1. Configure an authorization in API Management.
+1. Authorize with GitHub and configure access policies.
+1. Create an API in API Management and configure a policy.
+
+## Prerequisites
+
+- A GitHub account is required.
+- Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+- Enable a [managed identity](api-management-howto-use-managed-service-identity.md) in the API Management instance.
+
+## Step 1: Register an application in GitHub
+
+1. Sign in to GitHub.
+1. In your account profile, go to **Settings > Developer Settings > OAuth Apps > Register a new application**.
+
+
+ :::image type="content" source="media/authorizations-how-to/register-application.png" alt-text="Screenshot of registering a new OAuth application in GitHub.":::
+ 1. Enter an **Application name** and **Homepage URL** for the application.
+ 1. Optionally, add an **Application description**.
+ 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager-test.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the name of your API Management service.
+1. Select **Register application**.
+1. In the **General** page, copy the **Client ID**, which you'll use in a later step.
+1. Select **Generate a new client secret**. Copy the secret, which won't be displayed again, and which you'll use in a later step.
+
+ :::image type="content" source="media/authorizations-how-to/generate-secret.png" alt-text="Screenshot showing how to get client ID and client secret for the application in GitHub.":::
+
+## Step 2: Configure an authorization in API Management
+
+1. Sign in to the Azure portal and go to your API Management instance.
+1. In the left menu, select **Authorizations** > **+ Create**.
+
+ :::image type="content" source="media/authorizations-how-to/create-authorization.png" alt-text="Screenshot of creating an API Management authorization in the Azure portal.":::
+1. In the **Create authorization** window, enter the following settings, and select **Create**:
+
+ |Settings |Value |
+ |||
+ |**Provider name** | A name of your choice, such as *github-01* |
+ |**Identity provider** | Select **GitHub** |
+ |**Grant type** | Select **Authorization code** |
+ |**Client id** | Paste the value you copied earlier from the app registration |
+ |**Client secret** | Paste the value you copied earlier from the app registration |
+ |**Scope** | Set the scope to `User` |
+ |**Authorization name** | A name of your choice, such as *auth-01* |
+
+
+
+1. After the authorization provider and authorization are created, select **Next**.
+
+1. On the **Login** tab, select **Login with GitHub**. Before the authorization can be used, it must be authorized with GitHub.
+
+ :::image type="content" source="media/authorizations-how-to/authorize-with-github.png" alt-text="Screenshot of logging into the GitHub authorization from the portal.":::
+
+## Step 3: Authorize with GitHub and configure access policies
+
+1. Sign in to your GitHub account if you're prompted to do so.
+1. Select **Authorize** so that the application can access the signed-in user's account.
+
+ :::image type="content" source="media/authorizations-how-to/consent-to-authorization.png" alt-text="Screenshot of consenting to authorize with Github.":::
+
+ After authorization, the browser is redirected to API Management and the window is closed. If prompted during redirection, select **Allow access**. In API Management, select **Next**.
+1. On the **Access policy** page, create an access policy so that API Management has access to use the authorization. Ensure that a managed identity is configured for API Management. [Learn more about managed identities in API Management](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity).
+
+1. Select **Managed identity** > **+ Add members**, and then select your subscription.
+1. In **Managed identity**, select **API Management service**, and then select the API Management instance that is used. Click **Select** and then **Complete**.
+
+ :::image type="content" source="media/authorizations-how-to/select-managed-identity.png" alt-text="Screenshot of selecting a managed identity to use the authorization.":::
+
+## Step 4: Create an API in API Management and configure a policy
+
+1. Sign in to the Azure portal and go to your API Management instance.
+1. In the left menu, select **APIs > + Add API**.
+1. Select **HTTP** and enter the following settings. Then select **Create**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *github* |
+ |**Web service URL** | https://api.github.com/users/ |
+ |**API URL suffix** | *github* |
+
+1. Navigate to the newly created API and select **Add Operation**. Enter the following settings and select **Save**.
+
+ |Setting |Value |
+ |||
+ |**Display name** | *getdata* |
+ |**URL** | /data |
+
+ :::image type="content" source="media/authorizations-how-to/add-operation.png" alt-text="Screenshot of adding a getdata operation to the API in the portal.":::
+
+1. In the **Inbound processing** section, select the (**</>**) (code editor) icon.
+1. Copy the following, and paste in the policy editor. Make sure the `provider-id` and `authorization-id` values correspond to the names you configured in Step 2. Select **Save**.
+
+ ```xml
+ <policies>
+ <inbound>
+ <base />
+ <get-authorization-context provider-id="github-01" authorization-id="auth-01" context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
+ <set-header name="Authorization" exists-action="override">
+ <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
+ </set-header>
+ <rewrite-uri template="@(context.Request.Url.Query.GetValueOrDefault("username",""))" copy-unmatched-params="false" />
+ <set-header name="User-Agent" exists-action="override">
+ <value>API Management</value>
+ </set-header>
+ </inbound>
+ <backend>
+ <base />
+ </backend>
+ <outbound>
+ <base />
+ </outbound>
+ <on-error>
+ <base />
+ </on-error>
+ </policies>
+ ```
+
+ The policy to be used consists of four parts.
+
+ - Fetch an authorization token.
+ - Create an HTTP header with the fetched authorization token.
+ - Create an HTTP header with a `User-Agent` header (GitHub requirement). [Learn more](https://docs.github.com/rest/overview/resources-in-the-rest-api#user-agent-required)
+ - Because the incoming request to API Management will include a query parameter called *username*, add the username to the backend call.
+
+ > [!NOTE]
+ > The `get-authorization-context` policy references the authorization provider and authorization that were created earlier. [Learn more](api-management-access-restriction-policies.md#GetAuthorizationContext) about how to configure this policy.
+
+ :::image type="content" source="media/authorizations-how-to/policy-configuration-cropped.png" lightbox="media/authorizations-how-to/policy-configuration.png" alt-text="Screenshot of configuring policy in the portal.":::
+1. Test the API.
+ 1. On the **Test** tab, enter a query parameter with the name *username*.
+ 1. As the value, enter the username used to sign in to GitHub, or another valid GitHub username.
+ 1. Select **Send**.
+ :::image type="content" source="media/authorizations-how-to/test-api.png" alt-text="Screenshot of testing the API successfully in the portal.":::
+
+ A successful response returns user data from the GitHub API.
+
+## Next steps
+
+Learn more about [access restriction policies](api-management-access-restriction-policies.md).
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
+
+ Title: About OAuth 2.0 authorizations in Azure API Management | Microsoft Docs
+description: Learn about authorizations in Azure API Management, a feature that simplifies the process of managing OAuth 2.0 authorization tokens to APIs
+++ Last updated : 06/03/2022+++
+# Authorizations overview
+
+API Management authorizations (preview) simplify the process of managing authorization tokens to OAuth 2.0 backend services.
+By configuring any of the supported identity providers and creating an authorization using the standardized OAuth 2.0 flow, API Management can retrieve and refresh access tokens to be used inside API Management or sent back to a client.
+This feature enables APIs to be exposed with or without a subscription key, and the authorization to the backend service uses OAuth 2.0.
+
+Some example scenarios that will be possible through this feature are:
+
+- Citizen/low-code developers using Power Apps or Power Automate can easily connect to SaaS providers that use OAuth 2.0.
+- Unattended scenarios such as an Azure function using a timer trigger can utilize this feature to connect to a backend API using OAuth 2.0.
+- A marketing team in an enterprise company could use the same authorization for interacting with a social media platform using OAuth 2.0.
+- Exposing APIs in API Management as a custom connector in Logic Apps where the backend service requires OAuth 2.0 flow.
+- On-behalf-of scenarios, where a service such as Dropbox or any other service protected by an OAuth 2.0 flow is used by multiple clients.
+- Connect to different services that require OAuth 2.0 authorization using synthetic GraphQL in API Management.
+- Enterprise Application Integration (EAI) patterns using service-to-service authorization can use the client credentials grant type against backend APIs that use OAuth 2.0.
+- Single-page applications that only want to retrieve an access token to be used in a client's SDK against an API using OAuth 2.0.
+
+The feature consists of two parts, management and runtime:
+
+* The **management** part takes care of configuring identity providers, enabling the consent flow for the identity provider, and managing access to the authorizations.
++
+* The **runtime** part uses the [`get-authorization-context`](api-management-access-restriction-policies.md#GetAuthorizationContext) policy to fetch and store access and refresh tokens. When a call comes into API Management and the `get-authorization-context` policy is executed, it first checks whether the existing authorization token is valid. If the authorization token has expired, the refresh token is used to fetch a new authorization token and refresh token from the configured identity provider. If the call to the identity provider is successful, the new authorization token is used, and both the authorization token and refresh token are stored encrypted.
++
+ During the policy execution, access to the tokens is also validated using access policies.
++
+### Requirements
+
+- Managed system-assigned identity must be enabled for the API Management instance.
+- The API Management instance must have outbound connectivity to the internet on port `443` (HTTPS).
+
+### Limitations
+
+For public preview, the following limitations exist:
+
+- Authorizations feature will be available in the Consumption tier in the coming weeks.
+- Authorizations feature is not supported in the following regions: swedencentral, australiacentral, australiacentral2, jioindiacentral.
+- Supported identity providers: Azure AD, Dropbox, Generic OAuth 2.0, GitHub, Google, LinkedIn, Spotify
+- Maximum configured number of authorization providers per API Management instance: 50
+- Maximum configured number of authorizations per authorization provider: 500
+- Maximum configured number of access policies per authorization: 100
+- Maximum requests per minute per authorization: 100
+- Authorization code PKCE flow with code challenge isn't supported.
+- Authorizations feature isn't supported on self-hosted gateways.
+- API documentation isn't available yet. See [this GitHub repository](https://github.com/Azure/APIManagement-Authorizations) for samples.
+
+### Authorization providers
+
+Authorization provider configuration includes which identity provider and grant type are used. Each identity provider requires different configurations.
+
+* An authorization provider configuration can only have one grant type.
+* One authorization provider configuration can have multiple authorizations.
+
+The following identity providers are supported for public preview:
+
+- Azure AD, Dropbox, Generic OAuth 2.0, GitHub, Google, LinkedIn, Spotify
++
+With the Generic OAuth 2.0 provider, other identity providers that support the standards of OAuth 2.0 flow can be used.
++
+### Authorizations
+
+To use an authorization provider, at least one *authorization* is required. The process of configuring an authorization differs based on the grant type used. Each authorization provider configuration only supports one grant type. For example, if you want to configure Azure AD to use both grant types, two authorization provider configurations are needed.
+
+**Authorization code grant type**
+
+Authorization code grant type is bound to a user context, meaning a user needs to consent to the authorization. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All identity providers support authorization code. [Read more about Authorization code grant type](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1).
+
+**Client credentials grant type**
+
+Client credentials grant type isn't bound to a user and is often used in application-to-application scenarios. No consent is required for client credentials grant type, and the authorization doesn't become invalid. [Read more about Client Credentials grant type](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.4).
++
+### Access policies
+Access policies determine which identities can use the authorization that the access policy is related to. The supported identities are managed identities, user identities, and service principals. The identities must belong to the same tenant as the API Management tenant.
+
+- **Managed identities** - System- or user-assigned identity for the API Management instance that is being used.
+- **User identities** - Users in the same tenant as the API Management instance.
+- **Service principals** - Applications in the same Azure AD tenant as the API Management instance.
+
+### Process flow for creating authorizations
+
+The following image shows the process flow for creating an authorization in API Management using the authorization code grant type. For public preview, no API documentation is available; see [this Postman collection](https://aka.ms/apimauthorizations/postmancollection) instead.
+++
+1. Client sends a request to create an authorization provider.
+1. Authorization provider is created, and a response is sent back.
+1. Client sends a request to create an authorization.
+1. Authorization is created, and a response is sent back with the information that the authorization is not "connected".
+1. Client sends a request to retrieve a login URL to start the OAuth 2.0 consent at the identity provider. The request includes a post-redirect URL to be used in the last step.
+1. Response is returned with a login URL that should be used to start the consent flow.
+1. Client opens a browser with the login URL that was provided in the previous step. The browser is redirected to the identity provider OAuth 2.0 consent flow.
+1. After the consent is approved, the browser is redirected with an authorization code to the redirect URL configured at the identity provider.
+1. API Management uses the authorization code to fetch access and refresh tokens.
+1. API Management receives the tokens and encrypts them.
+1. API Management redirects to the provided URL from step 5.
+
+### Process flow for runtime
+
+The following image shows the process flow to fetch and store authorization and refresh tokens based on a configured authorization. After the tokens have been retrieved, a call is made to the backend API.
++
+1. Client sends request to API Management instance.
+1. The policy [`get-authorization-context`](api-management-access-restriction-policies.md#GetAuthorizationContext) checks if the access token is valid for the current authorization.
+1. If the access token has expired but the refresh token is valid, API Management tries to fetch new access and refresh tokens from the configured identity provider.
+1. The identity provider returns both an access token and a refresh token, which are encrypted and saved to API Management.
+1. After the tokens have been retrieved, the access token is attached using the `set-header` policy as an authorization header to the outgoing request to the backend API.
+1. Response is returned to API Management.
+1. Response is returned to the client.
+
+### Error handling
+
+If acquiring the authorization context results in an error, the outcome depends on how the attribute `ignore-error` is configured in the policy `get-authorization-context`. If the value is set to `false` (default), an error with `500 Internal Server Error` will be returned. If the value is set to `true`, the error will be ignored and execution will proceed with the context variable set to `null`.
+
+If the value is set to `false`, and the on-error section in the policy is configured, the error will be available in the property `context.LastError`. By using the on-error section, the error that is sent back to the client can be adjusted. Errors from API Management can be caught using standard Azure alerts. Read more about [handling errors in policies](api-management-error-handling-policies.md).
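+
+As a sketch (the status code, header, and response body shown here are illustrative, not prescriptive), an `on-error` section could surface the error to the client like this:
+
+```xml
+<on-error>
+    <base />
+    <return-response>
+        <set-status code="500" reason="Authorization Error" />
+        <set-header name="Content-Type" exists-action="override">
+            <value>application/json</value>
+        </set-header>
+        <!-- context.LastError carries the details of the failed policy. -->
+        <set-body>@{
+            return new JObject(
+                new JProperty("error", context.LastError?.Reason),
+                new JProperty("message", context.LastError?.Message)
+            ).ToString();
+        }</set-body>
+    </return-response>
+</on-error>
+```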
+
+### Authorizations FAQ
+
+##### How can I provide feedback and influence the roadmap for this feature?
+
+Please use [this](https://aka.ms/apimauthorizations/feedback) form to provide feedback.
+
+##### How are the tokens stored in API Management?
+
+The access token and other secrets (for example, client secrets) are encrypted with envelope encryption and stored in internal, multitenant storage. The data is encrypted with AES-128 using a key that is unique per piece of data; those keys are encrypted asymmetrically with a master certificate stored in Azure Key Vault and rotated every month.
+
+##### When are the access tokens refreshed?
+
+When the policy `get-authorization-context` is executed at runtime, API Management checks if the stored access token is valid. If the token has expired or is near expiry, API Management uses the refresh token to fetch a new access token and a new refresh token from the configured identity provider. If the refresh token has expired, an error is thrown, and the authorization needs to be reauthorized before it will work.
+
+##### What happens if the client secret expires at the identity provider?
+At runtime API Management can't fetch new tokens, and an error will occur.
+
+* If the authorization is of type authorization code, the client secret needs to be updated at the authorization provider level.
+
+* If the authorization is of type client credentials, the client secret needs to be updated at the authorization level.
+
+##### Is this feature supported using API Management running inside a VNet?
+
+Yes, as long as the API Management gateway has outbound internet connectivity on port `443`.
+
+##### What happens when an authorization provider is deleted?
+
+All underlying authorizations and access policies are also deleted.
+
+##### Are the access tokens cached by API Management?
+
+The access token is cached by API Management until 3 minutes before the token expiration time.
+
+##### What grant types are supported?
+
+For public preview, the Azure AD identity provider supports authorization code and client credentials.
+
+The other identity providers support authorization code. After public preview, more identity providers and grant types will be added.
+
+### Next steps
+
+- Learn how to [configure and use an authorization](authorizations-how-to.md).
+- See [reference](authorizations-reference.md) for supported identity providers in authorizations.
+- Use [policies](api-management-howto-policies.md) together with authorizations.
+- Authorizations [samples](https://github.com/Azure/APIManagement-Authorizations) GitHub repository.
+- Learn more about OAuth 2.0:
+
+ * [OAuth 2.0 overview](https://aaronparecki.com/oauth-2-simplified/)
+ * [OAuth 2.0 specification](https://oauth.net/2/)
api-management Authorizations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-reference.md
+
+ Title: Reference for OAuth 2.0 authorizations - Azure API Management | Microsoft Docs
+description: Reference for identity providers supported in authorizations in Azure API Management. API Management authorizations manage OAuth 2.0 authorization tokens to APIs.
+++ Last updated : 05/02/2022+++
+# Authorizations reference
+This article is a reference for the supported identity providers in API Management [authorizations](authorizations-overview.md) (preview) and their configuration options.
+
+## Azure Active Directory
++
+**Supported grant types**: authorization code and client credentials
++
+### Authorization provider - Authorization code grant type
+
+| Name | Required | Description | Default |
+|||||
+| Provider name | Yes | Name of Authorization provider. | |
+| Client id | Yes | The id used to identify this application with the service provider. | |
+| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
+| Login URL | No | The Azure Active Directory login URL. | https://login.windows.net |
+| Tenant ID | No | The tenant ID of your Azure Active Directory application. | common |
+| Resource URL | Yes | The resource to get authorization for. | |
+| Scopes | No | Scopes used for the authorization. Multiple scopes can be defined, separated by a space, for example, "User.Read User.ReadBasic.All". | |
++
+### Authorization - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Authorization name | Yes | Name of Authorization. | |
+
+
+
+### Authorization provider - Client credentials grant type
+| Name | Required | Description | Default |
+|||||
+| Provider name | Yes | Name of Authorization provider. | |
+| Login URL | No | The Azure Active Directory login URL. | https://login.windows.net |
+| Tenant ID | No | The tenant ID of your Azure Active Directory application. | common |
+| Resource URL | Yes | The resource to get authorization for. | |
++
+### Authorization - Client credentials grant type
+| Name | Required | Description | Default |
+|||||
+| Authorization name | Yes | Name of Authorization. | |
+| Client id | Yes | The id used to identify this application with the service provider. | |
+| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
+
+
+
+## Google, LinkedIn, Spotify, Dropbox, GitHub
+
+**Supported grant types**: authorization code
+
+### Authorization provider - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Provider name | Yes | Name of Authorization provider. | |
+| Client id | Yes | The id used to identify this application with the service provider. | |
+| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
+| Scopes | No | Scopes used for the authorization. Depending on the identity provider, multiple scopes are separated by space or comma. Default for most identity providers is space. | |
++
+### Authorization - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Authorization name | Yes | Name of Authorization. | |
+
+
+
+## Generic OAuth 2.0
+
+**Supported grant types**: authorization code
++
+### Authorization provider - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Provider name | Yes | Name of Authorization provider. | |
+| Client id | Yes | The id used to identify this application with the service provider. | |
+| Client secret | Yes | The shared secret used to authenticate this application with the service provider. ||
+| Authorization URL | No | The authorization endpoint URL. | |
+| Token URL | No | The token endpoint URL. | |
+| Refresh URL | No | The token refresh endpoint URL. | |
+| Scopes | No | Scopes used for the authorization. Depending on the identity provider, multiple scopes are separated by space or comma. Default for most identity providers is space. | |
++
+### Authorization - Authorization code grant type
+| Name | Required | Description | Default |
+|||||
+| Authorization name | Yes | Name of Authorization. | |
+
+## Next steps
+
+Learn more about [authorizations](authorizations-overview.md) and how to [create and use authorizations](authorizations-how-to.md).
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
+
+ Title: Mitigate OWASP API security top 10 in Azure API Management
+description: Learn how to protect against common API-based vulnerabilities, as identified by the OWASP API Security Top 10 threats, using Azure API Management.
+++ Last updated : 05/31/2022+++
+# Recommendations to mitigate OWASP API Security Top 10 threats using API Management
+
+The Open Web Application Security Project ([OWASP](https://owasp.org/about/)) Foundation works to improve software security through its community-led open source software projects, hundreds of chapters worldwide, tens of thousands of members, and by hosting local and global conferences.
+
+The OWASP [API Security Project](https://owasp.org/www-project-api-security/) focuses on strategies and solutions to understand and mitigate the unique *vulnerabilities and security risks of APIs*. In this article, we'll discuss recommendations to use Azure API Management to mitigate the top 10 API threats identified by OWASP.
+
+## Broken object level authorization
+
+API objects that aren't protected with the appropriate level of authorization may be vulnerable to data leaks and unauthorized data manipulation through weak object access identifiers. For example, an attacker could exploit an integer object identifier, which can be iterated.
+
+More information about this threat: [API1:2019 Broken Object Level Authorization](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa1-broken-object-level-authorization.md)
+
+### Recommendations
+
+* The best place to implement object level authorization is within the backend API itself. At the backend, the correct authorization decisions can be made at the request (or object) level, where applicable, using logic applicable to the domain and API. Consider scenarios where a given request may yield differing levels of detail in the response, depending on the requestor's permissions and authorization.
+
+* If a current vulnerable API can't be changed at the backend, then API Management could be used as a fallback. For example:
+
+ * Use a custom policy to implement object-level authorization, if it's not implemented in the backend.
+
+ * Implement a custom policy to map identifiers from request to backend and from backend to client, so that internal identifiers aren't exposed.
+
+ In these cases, the custom policy could be a [policy expression](api-management-policy-expressions.md) with a look-up (for example, a dictionary) or integration with another service through the [send request](api-management-advanced-policies.md#SendRequest) policy; a sketch follows this list.
+
+* For GraphQL scenarios, enforce object-level authorization through the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy, using the `authorize` element.
+
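+Below is a minimal sketch of the send-request approach (the authorization service URL, request shape, and `id` template parameter are all hypothetical):
+
+```xml
+<!-- Ask an external authorization service whether the caller may access the requested object (sketch). -->
+<send-request mode="new" response-variable-name="authzResponse" timeout="10" ignore-error="false">
+    <set-url>https://authz.contoso.example/check</set-url>
+    <set-method>POST</set-method>
+    <set-header name="Content-Type" exists-action="override">
+        <value>application/json</value>
+    </set-header>
+    <set-body>@{
+        return new JObject(
+            new JProperty("subject", context.Request.Headers.GetValueOrDefault("Authorization", "")),
+            new JProperty("objectId", context.Request.MatchedParameters["id"])
+        ).ToString();
+    }</set-body>
+</send-request>
+<!-- Reject the call unless the authorization service approves it. -->
+<choose>
+    <when condition="@(((IResponse)context.Variables["authzResponse"]).StatusCode != 200)">
+        <return-response>
+            <set-status code="403" reason="Forbidden" />
+        </return-response>
+    </when>
+</choose>
+```
+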
+## Broken user authentication
+
+Authentication mechanisms are often implemented incorrectly or missing, allowing attackers to exploit implementation flaws to access data.
+
+More information about this threat: [API2:2019 Broken User Authentication](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa2-broken-user-authentication.md)
+
+### Recommendations
+
+Use API Management for user authentication and authorization:
+
+* **Authentication** - API Management supports the following [authentication methods](api-management-authentication-policies.md):
+
+ * [Basic authentication](api-management-authentication-policies.md#Basic) policy - Username and password credentials.
+
+ * [Subscription key](api-management-subscriptions.md) - A subscription key provides a similar level of security as basic authentication and may not be sufficient alone. If the subscription key is compromised, an attacker may get unlimited access to the system.
+
+ * [Client certificate](api-management-authentication-policies.md#ClientCertificate) policy - Using client certificates is more secure than basic credentials or subscription key, but it doesn't allow the flexibility provided by token-based authorization protocols such as OAuth 2.0.
+
+* **Authorization** - API Management supports a [validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy to check the validity of an incoming OAuth 2.0 JWT access token based on information obtained from the OAuth identity provider's metadata endpoint. Configure the policy to check relevant token claims, audience, and expiration time. Learn more about protecting an API using [OAuth 2.0 authorization and Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
+
+More recommendations:
+
+* Use [access restriction policies](api-management-access-restriction-policies.md) in API Management to increase security. For example, [call rate limiting](api-management-access-restriction-policies.md#LimitCallRate) slows down bad actors using brute force attacks to compromise credentials.
+
+* APIs should use TLS/SSL (transport security) to protect the credentials or tokens. Credentials and tokens should be sent in request headers and not as query parameters.
+
+* In the API Management [developer portal](api-management-howto-developer-portal.md), configure [Azure Active Directory](api-management-howto-aad.md) or [Azure Active Directory B2C](api-management-howto-aad-b2c.md) as the identity provider to increase the account security. The developer portal uses CAPTCHA to mitigate brute force attacks.
+
+### Related information
+
+* [Authentication vs. authorization](../active-directory/develop/authentication-vs-authorization.md)
+
+## Excessive data exposure
+
+Good API interface design is deceptively challenging. Often, particularly with legacy APIs that have evolved over time, the request and response interfaces contain more data fields than the consuming applications require.
+
+A bad actor could attempt to access the API directly (perhaps by replaying a valid request), or sniff the traffic between server and API. Analysis of the API actions and the data available could yield sensitive data to the attacker, which isn't surfaced to, or used by, the frontend application.
+
+More information about this threat: [API3:2019 Excessive Data Exposure](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa3-excessive-data-exposure.md)
+
+### Recommendations
+
+* The best approach to mitigating this vulnerability is to ensure that the external interfaces defined at the backend API are designed carefully and, ideally, independently of the data persistence. They should contain only the fields required by consumers of the API. APIs should be reviewed frequently, and legacy fields deprecated, then removed.
+
+ In API Management, use:
+ * [Revisions](api-management-revisions.md) to gracefully control nonbreaking changes, for example, the addition of a field to an interface. You may use revisions along with a versioning implementation at the backend.
+
+ * [Versions](api-management-versions.md) for breaking changes, for example, the removal of a field from an interface.
+
+* If it's not possible to alter the backend interface design and excessive data is a concern, use API Management [transformation policies](transform-api.md) to rewrite response payloads and mask or filter data (see the sketch after this list). For example, [remove unneeded JSON properties](./policies/filter-response-content.md) from a response body.
+
+* [Response content validation](validation-policies.md#validate-content) in API Management can be used with an XML or JSON schema to block responses with undocumented properties or improper values. The policy also supports blocking responses exceeding a specified size.
+
+* Use the [validate status code](validation-policies.md#validate-status-code) policy to block responses with errors undefined in the API schema.
+
+* Use the [validate headers](validation-policies.md#validate-headers) policy to block responses with headers that aren't defined in the schema or don't comply to their definition in the schema. Remove unwanted headers with the [set header](api-management-transformation-policies.md#SetHTTPheader) policy.
+
+* For GraphQL scenarios, use the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy to validate GraphQL requests, authorize access to specific query paths, and limit response size.
+
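+As a sketch of rewriting responses (the `ssn` property and `X-Powered-By` header are hypothetical examples), the outbound section could mask data and drop unwanted headers like this:
+
+```xml
+<outbound>
+    <base />
+    <!-- Remove a header that leaks implementation details. -->
+    <set-header name="X-Powered-By" exists-action="delete" />
+    <!-- Strip a sensitive property from the JSON response body. -->
+    <set-body>@{
+        var body = context.Response.Body.As<JObject>();
+        body.Property("ssn")?.Remove();
+        return body.ToString();
+    }</set-body>
+</outbound>
+```
+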
+## Lack of resources and rate limiting
+
+Lack of rate limiting may lead to data exfiltration or successful DDoS attacks on backend services, causing an outage for all consumers.
+
+More information about this threat: [API4:2019 Lack of resources and rate limiting](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa4-lack-of-resources-and-rate-limiting.md)
+
+### Recommendations
+
+* Use [rate limit](api-management-access-restriction-policies.md#LimitCallRate) (short-term) and [quota limit](api-management-access-restriction-policies.md#SetUsageQuota) (long-term) policies to control the allowed number of API calls or bandwidth per consumer (a sketch combining these policies follows this list).
+
+* Define strict request object definitions and their properties in the OpenAPI definition. For example, define the maximum value for paging integers, and `maxLength` and regular expression (regex) patterns for strings. Enforce those schemas with the [validate content](validation-policies.md#validate-content) and [validate parameters](validation-policies.md#validate-parameters) policies in API Management.
+
+* Enforce maximum size of the request with the [validate content](validation-policies.md#validate-content) policy.
+
+* Optimize performance with [built-in caching](api-management-howto-cache.md), thus reducing the consumption of CPU, memory, and networking resources for certain operations.
+
+* Enforce authentication for API calls (see [Broken user authentication](#broken-user-authentication)). Revoke access for abusive users. For example, deactivate the subscription key, block the IP address with the [restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) policy, or reject requests for a certain user claim from a [JWT token](api-management-access-restriction-policies.md#ValidateJWT).
+
+* Apply a [CORS](api-management-cross-domain-policies.md#CORS) policy to control the websites that are allowed to load the resources served through the API. To avoid overly permissive configurations, don't use wildcard values (`*`) in the CORS policy.
+
+* Minimize the time it takes a backend service to respond. The longer the backend service takes to respond, the longer the connection is occupied in API Management, therefore reducing the number of requests that can be served in a given timeframe.
+
+ * Define `timeout` in the [forward request](api-management-advanced-policies.md#ForwardRequest) policy.
+
+ * Use the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy for GraphQL APIs and configure `max-depth` and `max-size` parameters.
+
+ * Limit the number of parallel backend connections with the [limit concurrency](api-management-advanced-policies.md#LimitConcurrency) policy.
+
+* While API Management can protect backend services from DDoS attacks, it may be vulnerable to those attacks itself. Deploy a bot protection service in front of API Management (for example, [Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md), [Azure Front Door](../frontdoor/front-door-overview.md), or [Azure DDoS Protection Service](../ddos-protection/ddos-protection-overview.md)) to better protect against DDoS attacks. When using a WAF with Azure Application Gateway or Azure Front Door, consider using [Microsoft_BotManagerRuleSet_1.0](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set).
+
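+The following sketch combines a short-term rate limit, a long-term quota, and a backend timeout (the specific numbers are illustrative, and the quota assumes subscription-protected APIs):
+
+```xml
+<policies>
+    <inbound>
+        <base />
+        <!-- Short-term: at most 10 calls per minute per client IP address. -->
+        <rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Request.IpAddress)" />
+        <!-- Long-term: at most 10,000 calls per day (86,400 seconds) per subscription. -->
+        <quota-by-key calls="10000" renewal-period="86400" counter-key="@(context.Subscription.Id)" />
+    </inbound>
+    <backend>
+        <!-- Fail fast if the backend doesn't respond within 20 seconds. -->
+        <forward-request timeout="20" />
+    </backend>
+    <outbound>
+        <base />
+    </outbound>
+</policies>
+```
+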
+## Broken function level authorization
+
+Complex access control policies with different hierarchies, groups, and roles, and an unclear separation between administrative and regular functions lead to authorization flaws. By exploiting these issues, attackers gain access to other users' resources or administrative functions.
+
+More information about this threat: [API5:2019 Broken function level authorization](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa5-broken-function-level-authorization.md)
+
+### Recommendations
+
+* By default, protect all API endpoints in API Management with [subscription keys](api-management-subscriptions.md).
+
+* Define a [validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy and enforce required token claims. If certain operations require stricter claims enforcement, define extra `validate-jwt` policies for those operations only (a sketch follows this list).
+
+* Use an Azure virtual network or Private Link to hide API endpoints from the internet. Learn more about [virtual network options](virtual-network-concepts.md) with API Management.
+
+* Don't define [wildcard API operations](add-api-manually.md#add-and-test-a-wildcard-operation) (that is, "catch-all" APIs with `*` as the path). Ensure that API Management only serves requests for explicitly defined endpoints, and requests to undefined endpoints are rejected.
+
+* Don't publish APIs with [open products](api-management-howto-add-products.md#access-to-product-apis) that don't require a subscription.
+
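+As a sketch, a `validate-jwt` policy enforcing a claim could look like the following (the tenant placeholder, audience, and `roles` claim value are illustrative):
+
+```xml
+<validate-jwt header-name="Authorization" failed-validation-httpcode="401" require-scheme="Bearer" require-expiration-time="true">
+    <openid-config url="https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration" />
+    <audiences>
+        <audience>api://my-api</audience>
+    </audiences>
+    <required-claims>
+        <claim name="roles" match="any">
+            <value>admin</value>
+        </claim>
+    </required-claims>
+</validate-jwt>
+```
+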
+## Mass assignment
+
+If an API offers more fields than the client requires for a given action, an attacker may inject excessive properties to perform unauthorized operations on data. Attackers may discover undocumented properties by inspecting the format of requests and responses or other APIs, or guessing them. This vulnerability is especially applicable if you don't use strongly typed programming languages.
+
+More information about this threat: [API6:2019 Mass assignment](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa6-mass-assignment.md)
+
+### Recommendations
+
+* External API interfaces should be decoupled from the internal data implementation. Avoid binding API contracts directly to data contracts in backend services. Review the API design frequently, and deprecate and remove legacy properties using [versioning](api-management-versions.md) in API Management.
+
+* Precisely define XML and JSON contracts in the API schema and use [validate content](validation-policies.md#validate-content) and [validate parameters](validation-policies.md#validate-parameters) policies to block requests and responses with undocumented properties (a minimal sketch follows this list). Blocking requests with undocumented properties mitigates attacks, while blocking responses with undocumented properties makes it harder to reverse-engineer potential attack vectors.
+
+* If the backend interface can't be changed, use [transformation policies](transform-api.md) to rewrite request and response payloads and decouple the API contracts from backend contracts. For example, mask or filter data or [remove unneeded JSON properties](./policies/filter-response-content.md).
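+
+As an illustration, a minimal `validate-content` sketch that blocks JSON bodies containing properties not declared in the API schema. The size limit and variable name are illustrative:
+
+```xml
+<validate-content unspecified-content-type-action="prevent" max-size="102400" size-exceeded-action="prevent" errors-variable-name="bodyValidationErrors">
+    <!-- Validate JSON payloads against the schema in the API definition and reject undocumented properties -->
+    <content type="application/json" validate-as="json" action="prevent" />
+</validate-content>
+```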
+
+## Security misconfiguration
+
+Attackers may attempt to exploit security misconfiguration vulnerabilities such as:
+
+* Missing security hardening
+* Unnecessary enabled features
+* Network connections unnecessarily open to the internet
+* Use of weak protocols or ciphers
+* Other settings or endpoints that may allow unauthorized access to the system
+
+More information about this threat: [API7:2019 Security misconfiguration](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa7-security-misconfiguration.md)
+
+### Recommendations
+
+* Correctly configure [gateway TLS](api-management-howto-manage-protocols-ciphers.md). Don't use vulnerable protocols (for example, TLS 1.0, 1.1) or ciphers.
+
+* Configure APIs to accept encrypted traffic only, for example through HTTPS or WSS protocols.
+
+* Consider deploying API Management behind a [private endpoint](private-endpoint.md) or attached to a [virtual network deployed in internal mode](api-management-using-with-internal-vnet.md). In internal networks, access can be controlled from within the private network (via firewall or network security groups) and from the internet (via a reverse proxy).
+
+* Use Azure API Management policies (see the combined CORS and client certificate sketch at the end of this list):
+
+ * Always inherit parent policies through the `<base>` tag.
+
+ * When using OAuth 2.0, configure and test the [validate JWT](api-management-access-restriction-policies.md#ValidateJWT) policy to check the existence and validity of the JWT token before it reaches the backend. Automatically check the token expiration time, token signature, and issuer. Enforce claims, audiences, token expiration, and token signature through policy settings.
+
+ * Configure the [CORS](api-management-cross-domain-policies.md#CORS) policy and don't use wildcard `*` for any configuration option. Instead, explicitly list allowed values.
+
+ * Set [validation policies](validation-policies.md) to `prevent` in production environments to validate JSON and XML schemas, headers, query parameters, and status codes, and to enforce the maximum size for request or response.
+
+ * If API Management is outside a network boundary, client IP validation is still possible using the [restrict caller IPs](api-management-access-restriction-policies.md#RestrictCallerIPs) policy. Ensure that it uses an allowlist, not a blocklist.
+
+ * If client certificates are used between caller and API Management, use the [validate client certificate](api-management-access-restriction-policies.md#validate-client-certificate) policy. Ensure that the `validate-revocation`, `validate-trust`, `validate-not-before`, and `validate-not-after` attributes are all set to `true`.
+
+ * Client certificates (mutual TLS) can also be applied between API Management and the backend. The backend should:
+
+ * Have authorization credentials configured
+
+ * Validate the certificate chain where applicable
+
+ * Validate the certificate name where applicable
+
+* For GraphQL scenarios, use the [validate GraphQL request](graphql-policies.md#validate-graphql-request) policy. Ensure that the `authorization` element and `max-size` and `max-depth` attributes are set.
+
+* Don't store secrets in policy files or in source control. Always use API Management [named values](api-management-howto-properties.md) or fetch the secrets at runtime using custom policy expressions.
+
+ * Named values should be [integrated with Key Vault](api-management-howto-properties.md#key-vault-secrets) or encrypted within API Management by marking them "secret". Never store secrets in plain-text named values.
+
+* Publish APIs through [products](api-management-howto-add-products.md), which require subscriptions. Don't use [open products](api-management-howto-add-products.md#access-to-product-apis) that don't require a subscription.
+
+* Use Key Vault integration to manage all certificates – this centralizes certificate management and can help to ease operations management tasks such as certificate renewal or revocation.
+
+* When using the [self-hosted gateway](self-hosted-gateway-overview.md), ensure that there's a process in place to update the image to the latest version periodically.
+
+* Represent backend services as [backend entities](backends.md). Configure authorization credentials, certificate chain validation, and certificate name validation where applicable.
+
+* When using the [developer portal](api-management-howto-developer-portal.md):
+
+ * If you choose to [self-host](developer-portal-self-host.md) the developer portal, ensure there's a process in place to periodically update the self-hosted portal to the latest version. Updates for the default managed version are automatic.
+
+ * Use [Azure Active Directory (Azure AD)](api-management-howto-aad.md) or [Azure Active Directory B2C](api-management-howto-aad-b2c.md) for user sign-up and sign-in. Disable the default username and password authentication, which is less secure.
+
+ * Assign [user groups](api-management-howto-create-groups.md#-associate-a-group-with-a-product) to products, to control the visibility of APIs in the portal.
+
+* Use [Azure Policy](security-controls-policy.md) to enforce API Management resource-level configuration and role-based access control (RBAC) permissions to control resource access. Grant minimum required privileges to every user.
+
+* Use a [DevOps process](devops-api-development-templates.md) and infrastructure-as-code approach outside of a development environment to ensure consistency of API Management content and configuration changes and to minimize human errors.
+
+* Don't use any deprecated features.
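+
+As an illustration, a minimal sketch of the CORS and client certificate checks described in these recommendations. The origin, methods, and headers are placeholders for your own allowlist:
+
+```xml
+<inbound>
+    <base />
+    <!-- Explicitly list allowed origins, methods, and headers; no wildcards -->
+    <cors allow-credentials="false">
+        <allowed-origins>
+            <origin>https://contoso.com</origin>
+        </allowed-origins>
+        <allowed-methods>
+            <method>GET</method>
+            <method>POST</method>
+        </allowed-methods>
+        <allowed-headers>
+            <header>Content-Type</header>
+        </allowed-headers>
+    </cors>
+    <!-- Fully validate the caller's certificate, including revocation, trust chain, and validity dates -->
+    <validate-client-certificate validate-revocation="true" validate-trust="true" validate-not-before="true" validate-not-after="true" ignore-error="false" />
+</inbound>
+```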
+
+## Injection
+
+Any endpoint accepting user data is potentially vulnerable to an injection exploit. Examples include, but aren't limited to:
+
+* [Command injection](https://owasp.org/www-community/attacks/Command_Injection), where a bad actor attempts to alter the API request to execute commands on the operating system hosting the API
+
+* [SQL injection](https://owasp.org/www-community/attacks/SQL_Injection), where a bad actor attempts to alter the API request to execute commands and queries against the database an API depends on
+
+More information about this threat: [API8:2019 Injection](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa8-injection.md)
+
+### Recommendations
+
+* [Modern Web Application Firewall (WAF) policies](https://github.com/SpiderLabs/ModSecurity) cover many common injection vulnerabilities. While API Management doesn't have a built-in WAF component, deploying a WAF upstream (in front) of the API Management instance is strongly recommended. For example, use [Azure Application Gateway](/azure/architecture/reference-architectures/apis/protect-apis) or [Azure Front Door](../frontdoor/front-door-overview.md).
+
+ > [!IMPORTANT]
+ > Ensure that a bad actor can't bypass the gateway hosting the WAF and connect directly to the API Management gateway or backend API itself. Possible mitigations include: [network ACLs](../virtual-network/network-security-groups-overview.md), using API Management policy to [restrict inbound traffic by client IP](api-management-access-restriction-policies.md#RestrictCallerIPs), removing public access where not required, and [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) (also known as mutual TLS or mTLS).
+
+* Use schema and parameter [validation](validation-policies.md) policies, where applicable, to further constrain and validate the request before it reaches the backend API service (a minimal sketch follows below).
+
+ The schema supplied with the API definition should have a regex pattern constraint applied to vulnerable fields. Each regex should be tested to ensure that it constrains the field sufficiently to mitigate common injection attempts.
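+
+As an illustration, a minimal `validate-parameters` sketch that rejects requests whose headers, query, or path parameters don't match the API schema. The variable name is illustrative:
+
+```xml
+<!-- Prevent requests with parameters that are missing from, or not declared in, the API schema -->
+<validate-parameters specified-parameter-action="prevent" unspecified-parameter-action="prevent" errors-variable-name="paramValidationErrors" />
+```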
+
+### Related information
+
+* [Deployment stamps pattern with Azure Front Door and API Management](/azure/architecture/patterns/deployment-stamp)
+
+* [Deploy Azure API Management with Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md)
+
+## Improper assets management
+
+Vulnerabilities related to improper assets management include:
+
+* Lack of proper API documentation or ownership information
+
+* Excessive numbers of older API versions, which may be missing security fixes
+
+More information about this threat: [API9:2019 Improper assets management](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa9-improper-assets-management.md)
+
+### Recommendations
+
+* Use a well-defined [OpenAPI specification](https://swagger.io/specification/) as the source for importing REST APIs. The specification allows encapsulation of the API definition, including self-documenting metadata.
+
+    * Use API interfaces with precise paths, data schemas, headers, query parameters, and status codes. Avoid [wildcard operations](add-api-manually.md#add-and-test-a-wildcard-operation). Provide descriptions for each API and operation and include contact and license information.
+
+    * Avoid endpoints that don't directly contribute to the business objective. They unnecessarily increase the attack surface area and make it harder to evolve the API.
+
+* Use [revisions](api-management-revisions.md) and [versions](api-management-versions.md) in API Management to govern and control the API endpoints. Have a strong backend versioning strategy and commit to a maximum number of supported API versions (for example, 2 or 3 prior versions). Plan to quickly deprecate and ultimately remove older, often less secure, API versions.
+
+* Use an API Management instance per environment (such as development, test, and production). Ensure that each API Management instance connects to its dependencies in the same environment. For example, in the test environment, the test API Management resource should connect to a test Azure Key Vault resource and the test versions of backend services. Use [DevOps automation and infrastructure-as-code practices](devops-api-development-templates.md) to help maintain consistency and accuracy between environments and reduce human errors.
+
+* Use tags to organize APIs and products and group them for publishing.
+
+* Publish APIs for consumption through the built-in [developer portal](api-management-howto-developer-portal.md). Make sure the API documentation is up-to-date.
+
+* Discover undocumented or unmanaged APIs and expose them through API Management for better control.
+
+## Insufficient logging and monitoring
+
+Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems to tamper with, and extract or destroy data. Most breach studies demonstrate that the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.
+
+More information about this threat: [API10:2019 Insufficient logging and monitoring](https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xaa-insufficient-logging-monitoring.md)
+
+### Recommendations
+
+* Understand [observability options](observability.md) in Azure API Management and [best practices](/azure/architecture/best-practices/monitoring) for monitoring in Azure.
+
+* Monitor API traffic with [Azure Monitor](api-management-howto-use-azure-monitor.md).
+
+* Log to [Application Insights](api-management-howto-app-insights.md) for debugging purposes. Correlate [transactions in Application Insights](../azure-monitor/app/transaction-diagnostics.md) between API Management and the backend API to [trace them end-to-end](../azure-monitor/app/correlation.md).
+
+* If needed, forward custom events to [Event Hubs](api-management-howto-log-event-hubs.md).
+
+* Set alerts in Azure Monitor and Application Insights - for example, for the [capacity metric](api-management-howto-autoscale.md) or for excessive requests or bandwidth transfer.
+
+* Use the [emit metrics](api-management-advanced-policies.md#emit-metrics) policy for custom metrics (see the sketch after this list).
+
+* Use the Azure Activity log for tracking activity in the service.
+
+* Use custom events in [Azure Application Insights](../azure-monitor/app/api-custom-events-metrics.md) and [Azure Monitor](../azure-monitor/app/custom-data-correlation.md) as needed.
+
+* Configure [OpenTelemetry](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md#introduction-to-opentelemetry) for [self-hosted gateways](self-hosted-gateway-overview.md) on Kubernetes.
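+
+As an illustration, a minimal `emit-metric` sketch that counts requests and slices them by API. The metric name and namespace are illustrative:
+
+```xml
+<emit-metric name="Request" value="1" namespace="apim-custom-metrics">
+    <!-- "API ID" is a built-in dimension populated from the request context -->
+    <dimension name="API ID" />
+</emit-metric>
+```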
+
+## Next steps
+
+* [Security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline)
+* [Security controls by Azure policy](security-controls-policy.md)
+* [Landing zone accelerator for API Management](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator)
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
First, enable Azure Active Directory authentication to the Azure database by ass
1. If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md).
-1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az_ad_user_list) and replace *\<user-principal-name>*. The result is saved to a variable.
+1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az-ad-user-list) and replace *\<user-principal-name>*. The result is saved to a variable.
```azurecli-interactive azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].objectId --output tsv)
First, enable Azure Active Directory authentication to the Azure database by ass
# [Azure SQL Database](#tab/sqldatabase)
-3. Add this Azure AD user as an Active Directory administrator using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az_sql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+3. Add this Azure AD user as an Active Directory administrator using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az-sql-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
```azurecli-interactive az sql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name ADMIN --object-id $azureaduser
First, enable Azure Active Directory authentication to the Azure database by ass
# [Azure Database for MySQL](#tab/mysql)
-3. Add this Azure AD user as an Active Directory administrator using [`az mysql server ad-admin create`](/cli/azure/mysql/server/ad-admin#az_mysql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+3. Add this Azure AD user as an Active Directory administrator using [`az mysql server ad-admin create`](/cli/azure/mysql/server/ad-admin#az-mysql-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
```azurecli-interactive az mysql server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name <user-principal-name> --object-id $azureaduser
First, enable Azure Active Directory authentication to the Azure database by ass
# [Azure Database for PostgreSQL](#tab/postgresql)
-3. Add this Azure AD user as an Active Directory administrator using [`az postgres server ad-admin create`](/cli/azure/postgres/server/ad-admin#az_postgres_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
+3. Add this Azure AD user as an Active Directory administrator using [`az postgres server ad-admin create`](/cli/azure/postgres/server/ad-admin#az-postgres-server-ad-admin-create) command in the Cloud Shell. In the following command, replace *\<group-name>* and *\<server-name>* with your own parameters.
```azurecli-interactive az postgres server ad-admin create --resource-group <group-name> --server-name <server-name> --display-name <user-principal-name> --object-id $azureaduser
First, enable Azure Active Directory authentication to the Azure database by ass
Next, you configure your App Service app to connect to SQL Database with a managed identity.
-1. Enable a managed identity for your App Service app with the [az webapp identity assign](/cli/azure/webapp/identity#az_webapp_identity_assign) command in the Cloud Shell. In the following command, replace *\<app-name>*.
+1. Enable a managed identity for your App Service app with the [az webapp identity assign](/cli/azure/webapp/identity#az-webapp-identity-assign) command in the Cloud Shell. In the following command, replace *\<app-name>*.
# [System-assigned identity](#tab/systemassigned/sqldatabase)
applied-ai-services Compose Custom Models Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-preview.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
recommendations: false
# Compose custom models v3.0 | Preview > [!NOTE]
-> This how-to guide references Form Recognizer v3.0 (preview). To use Form Recognizer v2.1 (GA), see [Compose custom models v2.1.](compose-custom-models.md).
+> This how-to guide references Form Recognizer v3.0 (preview). To use Form Recognizer v2.1 (GA), see [Compose custom models v2.1](compose-custom-models.md).
-A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign up to 100 trained custom models to a single composed model. When analyze documents with a composed model, Form Recognizer will first classify the form you submitted, then choose the best matching assigned model, and return results the results.
+A composed model is created by taking a collection of custom models and assigning them to a single model ID. You can assign up to 100 trained custom models to a single composed model ID. When a document is submitted to a composed model, the service performs a classification step to decide which custom model accurately represents the form presented for analysis. Composed models are useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-To learn more, see [Composed custom models](concept-composed-models.md)
+To learn more, see [Composed custom models](concept-composed-models.md).
In this article, you'll learn how to create and use composed custom models to analyze your forms and documents.
In this article, you'll learn how to create and use composed custom models to an
To get started, you'll need the following resources:
-* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/)
+* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/).
* **A Form Recognizer instance**. Once you have your Azure subscription, [create a Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
To get started, you'll need the following resources:
## Create your custom models
-First, you'll need to a set of custom models to compose. You can use the Form Recognizer Studio, REST API, or client-library SDKs. The steps are as follows:
+First, you'll need a set of custom models to compose. You can use the Form Recognizer Studio, REST API, or client-library SDKs. The steps are as follows:
* [**Assemble your training dataset**](#assemble-your-training-dataset) * [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.mi
### [Form Recognizer Studio](#tab/studio)
-To create custom models, you start with configuring your project:
+To create custom models, start with configuring your project:
-1. From the Studio home, select the [Custom form project](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to open the Custom form home page.
+1. From the Studio homepage, select [**Create new**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) from the Custom model card.
1. Use the ➕ **Create a project** command to start the new project configuration wizard.
See [Form Recognizer Studio: labeling as tables](quickstarts/try-v3-form-recogni
### [REST API](#tab/rest)
-Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
+Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
Label files contain key-value associations that a user has entered manually. They're needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to generate these files.
Training with labels leads to better performance in some scenarios. To train wit
|Language |Method| |--|--| |**C#**|[**StartBuildModel**](/dotnet/api/azure.ai.formrecognizer.documentanalysis.documentmodeladministrationclient.startbuildmodel?view=azure-dotnet-preview#azure-ai-formrecognizer-documentanalysis-documentmodeladministrationclient-startbuildmodel&preserve-view=true)|
-|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.administration.documentmodeladministrationclient.beginbuildmodel?view=azure-java-preview&preserve-view=true)|
+|**Java**| [**beginBuildModel**](/java/api/com.azure.ai.formrecognizer.administration.documentmodeladministrationclient.beginbuildmodel?view=azure-java-preview&preserve-view=true)|
|**JavaScript** | [**beginBuildModel**](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-preview#@azure-ai-form-recognizer-documentmodeladministrationclient-beginbuildmodel&preserve-view=true)| | **Python** | [**begin_build_model**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.aio.documentmodeladministrationclient?view=azure-python-preview#azure-ai-formrecognizer-aio-documentmodeladministrationclient-begin-build-model&preserve-view=true)
When you train models using the [**Form Recognizer Studio**](https://formrecogni
1. Once the model is ready, use the **Test** command to validate it with your test documents and observe the results. -- #### Analyze documents The custom model **Analyze** operation requires you to provide the `modelID` in the call to Form Recognizer. You should provide the composed model ID for the `modelID` parameter in your applications.
The custom model **Analyze** operation requires you to provide the `modelID` in
#### Manage your composed models You can manage your custom models throughout their life cycles:
-
+ * Test and validate new documents. * Download your model to use in your applications. * Delete your model when its lifecycle is complete.
The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services
#### Analyze documents
-You can make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) request using a unique model name in the request parameters.
+To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) request, use a unique model name in the request parameters.
:::image type="content" source="media/custom-model-analyze-request.png" alt-text="Screenshot of a custom model request URL.":::
You can use the programming language of your choice to create a composed model:
#### Analyze documents
-Once you have built your composed model, it can be used to analyze forms and documents You can use your composed `model ID` and let the service decide which of your aggregated custom models fits best according to the document provided.
+Once you've built your composed model, you can use it to analyze forms and documents. Use your composed `model ID` and let the service decide which of your aggregated custom models fits best according to the document provided.
|Programming language| Code sample | |--|--|
Once you have built your composed model, it can be used to analyze forms and doc
## Manage your composed models
-Custom models can be managed throughout their lifecycle. You can view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account.
+You can manage custom models at each stage of their life cycle. You can view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account.
|Programming language| Code sample | |--|--|
Custom models can be managed throughout their lifecycle. You can view a list of
## Next steps
-Try one of our quickstarts to get started using Form Recognizer preview
+Try one of our Form Recognizer quickstarts:
> [!div class="nextstepaction"] > [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md)
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
In this article, you'll learn how to create Form Recognizer custom and composed
## Sample Labeling tool
-You can see how data is extracted from custom forms by trying our Sample Labeling tool. You'll need the following resources:
+Try extracting data from custom forms using our Sample Labeling tool. You'll need the following resources:
* An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
You can see how data is extracted from custom forms by trying our Sample Labelin
In the Form Recognizer UI: 1. Select **Use Custom to train a model with labels and get key value pairs**.
-
- :::image type="content" source="media/label-tool/fott-use-custom.png" alt-text="Screenshot: FOTT tool select custom option.":::
+
+ :::image type="content" source="media/label-tool/fott-use-custom.png" alt-text="Screenshot of the FOTT tool select custom model option.":::
1. In the next window, select **New project**:
- :::image type="content" source="media/label-tool/fott-new-project.png" alt-text="Screenshot: FOTT tool select new project.":::
+ :::image type="content" source="media/label-tool/fott-new-project.png" alt-text="Screenshot of the FOTT tool select new project option.":::
## Create your models
You [train your model](./quickstarts/try-sdk-rest-api.md#train-a-custom-model)
When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model and add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
+Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model. Add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key value pairs and tables using supervised learning capabilities.
[Get started with Train with labels](label-tool.md)
When you train models using the [**Form Recognizer Sample Labeling tool**](https
### [**REST API**](#tab/rest-api)
-The [**REST API**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#train-a-custom-model), will return a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model:
+The [**REST API**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#train-a-custom-model) will return a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model:
:::image type="content" source="media/model-id.png" alt-text="Screenshot: the returned location header containing the model ID.":::
The [**REST API**](./quickstarts/try-sdk-rest-api.md?pivots=programming-language
#### Compose your custom models
-After you have gathered your custom models corresponding to a single form type, you can compose them into a single model.
+After you've gathered your custom models corresponding to a single form type, you can compose them into a single model.
### [**Form Recognizer Sample Labeling tool**](#tab/fott)
The **Sample Labeling tool** enables you to quickly get started training models
After you have completed training, compose your models as follows:
-1. On the left rail menu, select the **Model Compose icon** (merging arrow).
+1. On the left rail menu, select the **Model Compose** icon (merging arrow).
1. In the main window, select the models you wish to assign to a single model ID. Models with the arrows icon are already composed models.
After you have completed training, compose your models as follows:
When the operation completes, your newly composed model will appear in the list.
- :::image type="content" source="media/custom-model-compose.png" alt-text="Screenshot: model compose window." lightbox="media/custom-model-compose-expanded.png":::
+ :::image type="content" source="media/custom-model-compose.png" alt-text="Screenshot of the model compose window." lightbox="media/custom-model-compose-expanded.png":::
### [**REST API**](#tab/rest-api)
Use the programming language code of your choice to create a composed model that
### [**Form Recognizer Sample Labeling tool**](#tab/fott)
-1. On the tool's left-pane menu, select the **Analyze icon** (lightbulb).
+1. On the tool's left-pane menu, select the **Analyze icon** (light bulb).
1. Choose a local file or image URL to analyze.
Using the programming language of your choice to analyze a form or document with
-Test your newly trained models by [analyzing forms](./quickstarts/try-sdk-rest-api.md#analyze-forms-with-a-custom-model) that were not part of the training dataset. Depending on the reported accuracy, you may want to do further training to improve the model. You can continue further training to [improve results](label-tool.md#improve-results).
+Test your newly trained models by [analyzing forms](./quickstarts/try-sdk-rest-api.md#analyze-forms-with-a-custom-model) that weren't part of the training dataset. Depending on the reported accuracy, you may want to do further training to improve the model. You can continue further training to [improve results](label-tool.md#improve-results).
## Manage your custom models You can [manage your custom models](./quickstarts/try-sdk-rest-api.md#manage-custom-models) throughout their lifecycle by viewing a [list of all custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetCustomModels) under your subscription, retrieving information about [a specific custom model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetCustomModel), and [deleting custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/DeleteCustomModel) from your account.
-Great! You have learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
+Great! You've learned the steps to create custom and composed models and use them in your Form Recognizer projects and applications.
## Next steps
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
Title: Form Recognizer business card model
-description: Concepts encompassing data extraction and analysis using prebuilt business card model
+description: Concepts related to data extraction and analysis using the prebuilt business card model.
Previously updated : 03/11/2022 Last updated : 06/06/2022 recommendations: false- <!-- markdownlint-disable MD033 -->
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID | |-|-|--|
-|**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-businessCard**|
+|**Business card model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-businessCard**|
### Try Form Recognizer
You'll need a business card document. You can use our [sample business card docu
| Model | Language—Locale code | Default | |--|:-|:|
-|Business card| <ul><li>English (United States)—en-US</li><li> English (Australia)—en-AU</li><li>English (Canada)—en-CA</li><li>English (United Kingdom)—en-GB</li><li>English (India)—en-IN</li></ul> | Autodetected |
+|Business card| <ul><li>English (United States)—en-US</li><li> English (Australia)—en-AU</li><li>English (Canada)—en-CA</li><li>English (United Kingdom)—en-GB</li><li>English (India)—en-IN</li><li>English (Japan)—en-JP</li><li>Japanese (Japan)—ja-JP</li></ul> | Autodetected (en-US or ja-JP) |
## Field extraction
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Previously updated : 03/25/2022 Last updated : 06/06/2022 recommendations: false
recommendations: false
With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-* ```Custom form```and ```Custom document``` models can be composed together into a single composed model when they're trained with the same API version or an API version later than ```2021-01-30-preview```. For more information on composing custom template and custom neural models, see [compose model limits](#compose-model-limits).
+* ```Custom form```and ```Custom document``` models can be composed together into a single composed model when they're trained with the same API version or an API version later than ```2021-06-30-preview```. For more information on composing custom template and custom neural models, see [compose model limits](#compose-model-limits).
* With the model compose operation, you can assign up to 100 trained custom models to a single composed model. To analyze a document with a composed model, Form Recognizer first classifies the submitted form, chooses the best-matching assigned model, and returns results. * For **_custom template models_**, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms may belong to one of several templates. * The response will include a ```docType``` property to indicate which of the composed models was used to analyze the document.
With composed models, you can assign multiple custom models to a composed model
### Composed model compatibility
- |Custom model type | API Version |Custom form 2021-01-30-preview (v3.0)| Custom document 2021-01-30-preview(v3.0) | Custom form GA version (v2.1) or earlier|
+ |Custom model type | API Version |Custom form 2021-06-30-preview (v3.0)| Custom document 2021-06-30-preview(v3.0) | Custom form GA version (v2.1) or earlier|
|--|--|--|--|--|
-|**Custom template** (updated custom form)| 2021-01-30-preview | ✱| ✓ | X |
-|**Custom neural**| trained with current API version (2021-01-30-preview) |✓ |✓ | X |
+|**Custom template** (updated custom form)| 2021-06-30-preview | ✱| ✓ | X |
+|**Custom neural**| trained with current API version (2021-06-30-preview) |✓ |✓ | X |
|**Custom form**| Custom form GA version (v2.1) or earlier | X | X| ✓| **Table symbols**: ✔—supported; **X**—not supported; ✱—unsupported for this API version, but will be supported in a future API version.
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Title: Form Recognizer custom neural model
-description: Learn about custom neural (neural) model type, its features and how you train a model with high accuracy to extract data from structured and unstructured documents
+description: Learn about custom neural (neural) model type, its features and how you train a model with high accuracy to extract data from structured and unstructured documents.
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
Custom neural models or neural models are a deep learned model that combines lay
|semi-structured | invoices, purchase orders | |unstructured | contracts, letters|
-Custom neural models share the same labeling format and strategy as custom template models. Currently custom neural models only support a subset of the field types supported by custom template models.
+Custom neural models share the same labeling format and strategy as [custom template](concept-custom-template.md) models. Currently custom neural models only support a subset of the field types supported by custom template models.
## Model capabilities Custom neural models currently only support key-value pairs and selection marks; future releases will include support for structured fields (tables) and signatures.
-| Form fields | Selection marks | Tables | Signature | Region |
-|--|--|--|--|--|
-| Supported| Supported | Unsupported | Unsupported | Unsupported |
+| Form fields | Selection marks | Tabular fields | Signature | Region |
+|:--:|:--:|:--:|:--:|:--:|
+| Supported | Supported | Supported | Unsupported | Unsupported |
+
+## Tabular fields
+
+With the release of API version **2022-06-30-preview**, custom neural models will support tabular fields (tables):
+
+* Models trained with API version 2022-06-30-preview or later will accept tabular field labels.
+* Documents analyzed with custom neural models using API version 2022-06-30-preview or later will produce tabular fields aggregated across the tables.
+* The results can be found in the ```analyzeResult``` object's ```documents``` array that is returned following an analysis operation.
+
+Tabular fields support **cross page tables** by default:
+
+* To label a table that spans multiple pages, label each row of the table across the different pages in a single table.
+* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages.
+
+Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
## Supported regions
-In public preview custom neural models can only be trained in select Azure regions.
+For the **2022-06-30-preview**, custom neural models can only be trained in the following Azure regions:
* AustraliaEast * BrazilSouth
In public preview custom neural models can only be trained in select Azure regio
* WestUS2 * WestUS3
-You can copy a model trained in one of the regions listed above to any other region for use.
+> [!TIP]
+> You can copy a model trained in one of the select regions listed above to **any other region** and use it accordingly.
## Best practices
-Custom neural models differ from custom template models in a few different ways.
+Custom neural models differ from custom template models in a few different ways. The custom template model relies on a consistent visual template to extract the labeled data. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. When you're choosing between the two model types, start with a neural model and test to determine if it supports your functional needs.
-### Dealing with variations
+### Dealing with variations
Custom neural models can generalize across different formats of a single document type. As a best practice, create a single model for all variations of a document type. Add at least five labeled samples for each of the different variations to the training dataset.
Custom neural models are only available in the [v3 API](v3-migration-guide.md).
| Document Type | REST API | SDK | Label and Test Models| |--|--|--|--|
-| Custom document | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+| Custom document | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
The build operation to train a model supports a new ```buildMode``` property. To train a custom neural model, set the ```buildMode``` to ```neural```. ```REST
-https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-preview
+https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
{ "modelId": "string",
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
Custom template models share the same labeling format and strategy as custom neu
## Model capabilities
-Custom template models support key-value pairs, selection marks, tables, signature fields, and selected regions.
+Custom template models support key-value pairs, selection marks, tables, signature fields, and selected regions.
-| Form fields | Selection marks | Structured fields (Tables) | Signature | Selected regions |
-|--|--|--|--|--|
+| Form fields | Selection marks | Tabular fields (Tables) | Signature | Selected regions |
+|:--:|:--:|:--:|:--:|:--:|
| Supported| Supported | Supported | Preview | Supported |
-## Dealing with variations
+## Tabular fields
-Template models rely on a defined visual template, changes to the template will result in lower accuracy. In those instances, split your training dataset to include at least five samples of each template and train a model for each of the variations. You can then [compose](concept-composed-models.md) the models into a single endpoint. When dealing with subtle variations, like digital PDF documents and images, it's best to include at least five examples of each type in the same training dataset.
+With the release of API version **2022-06-30-preview**, custom template models will support tabular fields (tables):
+
+* Models trained with API version 2022-06-30-preview or later will accept tabular field labels.
+* Documents analyzed with custom template models using API version 2022-06-30-preview or later will produce tabular fields aggregated across the tables.
+* The results can be found in the ```analyzeResult``` object's ```documents``` array that is returned following an analysis operation.
+
+Tabular fields support **cross page tables** by default:
+
+* To label a table that spans multiple pages, label each row of the table across the different pages in a single table.
+* As a best practice, ensure that your dataset contains a few samples of the expected variations. For example, include samples where the entire table is on a single page and where tables span two or more pages.
+
+Tabular fields are also useful when extracting repeating information within a document that isn't recognized as a table. For example, a repeating section of work experiences in a resume can be labeled and extracted as a tabular field.
+
+## Dealing with variations
+
+Template models rely on a defined visual template; changes to the template will result in lower accuracy. In those instances, split your training dataset to include at least five samples of each template and train a model for each of the variations. You can then [compose](concept-composed-models.md) the models into a single endpoint. For subtle variations, like digital PDF documents and images, it's best to include at least five examples of each type in the same training dataset.
## Training a model
Template models are available generally [v2.1 API](https://westus.dev.cognitive.
| Model | REST API | SDK | Label and Test Models| |--|--|--|--|
-| Custom template (preview) | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom template (preview) | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
| Custom template | [Form Recognizer 2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm)| [Form Recognizer SDK](quickstarts/get-started-sdk-rest-api.md?pivots=programming-language-python)| [Form Recognizer Sample labeling tool](https://fott-2-1.azurewebsites.net/)| On the v3 API, the build operation to train a model supports a new ```buildMode``` property. To train a custom template model, set the ```buildMode``` to ```template```. ```REST
-https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-preview
+https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
{ "modelId": "string",
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Previously updated : 03/10/2022 Last updated : 06/06/2022 recommendations: false
Your training set will consist of structured documents where the formatting and
### Custom neural model
-The custom neural (custom document) model uses deep learning models and base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
+The custom neural (custom document) model uses deep learning models and a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model to determine if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
## Build mode
The following tools are supported by Form Recognizer v3.0:
### Try Form Recognizer
-See how data is extracted from your specific or unique documents by using custom models. You need the following resources:
+Try extracting data from your specific or unique documents using custom models. You need the following resources:
* An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services/). * A [Form Recognizer instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
The following table describes the features available with the associated tools a
| Document type | REST API | SDK | Label and Test Models| |--|--|--|--| | Custom form 2.1 | [Form Recognizer 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Form Recognizer SDK](quickstarts/get-started-sdk-rest-api.md?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)|
-| Custom template 3.0 | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
-| Custom neural | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+| Custom template 3.0 | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)|
+| Custom neural | [Form Recognizer 3.0 (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)| [Form Recognizer Preview SDK](quickstarts/try-v3-python-sdk.md)| [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
> [!NOTE]
The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) doesn't support
* **Custom model API (v3.0)**: This version supports signature detection for custom forms. When you train custom models, you can specify certain fields as signatures. When a document is analyzed with your custom model, it indicates whether a signature was detected or not. * [Form Recognizer v3.0 migration guide](v3-migration-guide.md): This guide shows you how to use the preview version in your applications and workflows.
-* [REST API (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument): This API shows you more about the preview version and new capabilities.
+* [REST API (preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument): This API shows you more about the preview version and new capabilities.
### Try signature detection
Explore Form Recognizer quickstarts and REST APIs:
| Quickstart | REST API| |--|--|
-|[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-01-30-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)|
+|[v3.0 Studio quickstart](quickstarts/try-v3-form-recognizer-studio.md) |[Form Recognizer v3.0 API 2022-06-30](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)|
| [v2.1 quickstart](quickstarts/get-started-sdk-rest-api.md) | [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/BuildDocumentModel) |
applied-ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-general-document.md
Title: Form Recognizer general document model | Preview
-description: Concepts encompassing data extraction and analysis using prebuilt general document preview model
+description: Concepts related to data extraction and analysis using prebuilt general document preview model
Previously updated : 03/08/2022 Last updated : 06/06/2022 recommendations: false
The General document preview model combines powerful Optical Character Recogniti
The general document API supports most form types and will analyze your documents and extract keys and associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels. > [!NOTE]
-> The ```2022-01-30-preview``` update to the general document model adds support for selection marks.
+> The ```2022-06-30-preview``` update to the general document model adds support for selection marks.
## General document features
-* The general document model is a pre-trained model, doesn't require labels or training.
+* The general document model is a pre-trained model; it doesn't require labels or training.
-* A single API extracts key-value pairs, selection marks entities, text, tables, and structure from documents.
+* A single API extracts key-value pairs, selection marks, entities, text, tables, and structure from documents.
* The general document model supports structured, semi-structured, and unstructured documents.
* Key names are spans of text within the document that are associated with a value.
-
* Selection marks are identified as fields with a value of ```:selected:``` or ```:unselected:```

***Sample document processed in the Form Recognizer Studio***
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources |
|-|-|
-|🆕 **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
+|🆕 **General document model**|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
### Try Form Recognizer
-See how data is extracted from forms and documents using the Form Recognizer Studio or our Sample Labeling tool.
+Try extracting data from forms and documents using the Form Recognizer Studio.
You'll need the following resources:
## Key-value pairs
-Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field or in an unstructured document they could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
+Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field. In an unstructured document, they could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
-Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. key-value pairs are always spans of text contained in the document and if you have documents where same value is described in different ways, for example, a customer or a user, the associated key will be either customer or user based on what the document contained.
+Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are spans of text contained in the document. If you have documents where the same value is described in different ways, for example, customer and user, the associated key will be either customer or user based on context.
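To make that concrete, here's a minimal sketch of reading the extracted key-value pairs with the preview Python SDK (`azure-ai-formrecognizer` 3.2.0b, the version the quickstarts reference); the endpoint, key, and document URL below are placeholders, not values from this article:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)
poller = client.begin_analyze_document_from_url(
    "prebuilt-document", "https://<your-storage>/sample-form.pdf"  # placeholder URL
)
result = poller.result()

# Each pair has a key, an optional value (None for isolated keys), and a confidence.
for pair in result.key_value_pairs:
    value = pair.value.content if pair.value else "<no associated value>"
    print(f"{pair.key.content!r} -> {value!r} (confidence: {pair.confidence})")
```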
## Entities

Natural language processing models can identify parts of speech and classify each token or word. The named entity recognition model is able to identify entities like people, locations, and dates to provide for a richer experience. Identifying entities enables you to distinguish between customer types, for example, an individual or an organization.
-The key value pair extraction model and entity identification model are run in parallel on the entire document and not just on the values of the extracted key-value pairs. This process ensures that complex structures where a key can't be identified is still enriched by identifying the entities referenced. You can still match keys or values to entities based on the offsets of the identified spans.
+The key-value pair extraction model and entity identification model are run in parallel on the entire document, not just on the values of the extracted key-value pairs. This process ensures that complex structures where a key can't be identified are still enriched by identifying the entities referenced. You can still match keys or values to entities based on the offsets of the identified spans.
* The general document is a pre-trained model and can be directly invoked via the REST API.
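As a sketch of that span-based matching, assuming a preview SDK version that surfaces the `entities` collection (the 3.2.0b previews do) and an `AnalyzeResult` obtained as in the previous sketch:

```python
from azure.ai.formrecognizer import AnalyzeResult

def spans_overlap(a, b) -> bool:
    # Two DocumentSpan objects overlap if their offset ranges intersect.
    return a.offset < b.offset + b.length and b.offset < a.offset + a.length

def entities_for_values(result: AnalyzeResult):
    # Pair each extracted value with any entity whose span overlaps it.
    for pair in result.key_value_pairs or []:
        if not pair.value:
            continue  # isolated key, nothing to match
        for entity in result.entities or []:  # `entities` is preview-only
            if any(spans_overlap(v, e) for v in pair.value.spans for e in entity.spans):
                yield pair.key.content, entity.content, entity.category
```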
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
Title: Form Recognizer ID document model
-description: Concepts encompassing data extraction and analysis using the prebuilt ID document model
+description: Concepts related to data extraction and analysis using the prebuilt ID document model
Previously updated : 03/11/2022 Last updated : 06/06/2022 recommendations: false- <!-- markdownlint-disable MD033 -->
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|-|--|
-|**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-idDocument**|
+|**ID document model**|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-idDocument**|
### Try Form Recognizer
-See how to extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
+Extract data, including name, birth date, machine-readable zone, and expiration date, from ID documents using the Form Recognizer Studio or our Sample Labeling tool. You'll need the following resources:
* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
You'll need an ID document. You can use our [sample ID document](https://raw.git
## Form Recognizer preview v3.0
- The Form Recognizer preview introduces several new features and capabilities:
+ The Form Recognizer preview v3.0 introduces several new features and capabilities:
-* **ID document (v3.0)** model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
+* **ID document (v3.0)** prebuilt model supports extraction of endorsement, restriction, and vehicle class codes from US driver's licenses.
+
+* The ID Document **2022-06-30-preview** release supports the following data extraction from US driver's licenses:
+
+ * Date issued
+ * Height
+ * Weight
+ * Eye color
+ * Hair color
+ * Document discriminator security code
### ID document preview field extraction

|Name| Type | Description | Standardized output|
|:--|:-|:-|:-|
-| 🆕 Endorsements | String | Additional driving privileges granted to a driver such as Motorcycle or School bus. | |
-| 🆕 Restrictions | String | Restricted driving privileges applicable to suspended or revoked licenses.| |
-| 🆕VehicleClassification | String | Types of vehicles that can be driven by a driver. ||
+| 🆕 DateOfIssue | Date | Issue date | yyyy-mm-dd |
+| 🆕 Height | String | Height of the holder. | |
+| 🆕 Weight | String | Weight of the holder. | |
+| 🆕 EyeColor | String | Eye color of the holder. | |
+| 🆕 HairColor | String | Hair color of the holder. | |
+| 🆕 DocumentDiscriminator | String | Document discriminator is a security code that identifies where and when the license was issued. | |
+| Endorsements | String | More driving privileges granted to a driver, such as Motorcycle or School bus. | |
+| Restrictions | String | Restricted driving privileges applicable to suspended or revoked licenses.| |
+| VehicleClassification | String | Types of vehicles that can be driven by a driver. ||
| CountryRegion | countryRegion | Country or region code compliant with ISO 3166 standard | |
| DateOfBirth | Date | DOB | yyyy-mm-dd |
| DateOfExpiration | Date | Expiration date | yyyy-mm-dd |
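To illustrate, a minimal sketch of reading a few of these fields with the preview Python SDK; the field names follow the table above, and the endpoint, key, and image URL are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)
poller = client.begin_analyze_document_from_url(
    "prebuilt-idDocument", "https://<your-storage>/license.jpg"  # placeholder URL
)
id_doc = poller.result().documents[0]

# Fields absent from a given license simply return None from .get().
for name in ("DateOfIssue", "Height", "Weight", "EyeColor", "HairColor",
             "DocumentDiscriminator", "Endorsements", "Restrictions"):
    field = id_doc.fields.get(name)
    if field:
        print(f"{name}: {field.value} (confidence: {field.confidence})")
```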
applied-ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md
Title: Form Recognizer invoice model
-description: Concepts encompassing data extraction and analysis using prebuilt invoice model
+description: Concepts related to data extraction and analysis using prebuilt invoice model
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|-|--|
-|**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-invoice**|
+|**Invoice model** | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-invoice**|
### Try Form Recognizer
You'll need an invoice document. You can use our [sample invoice document](https
|--|:-|:|
|Invoice| <ul><li>English (United States)-en-US</li></ul>| English (United States)-en-US|
|Invoice| <ul><li>Spanish-es</li></ul>| Spanish (United States)-es|
+|Invoice (preview)| <ul><li>German-de</li></ul>| German (Germany)-de|
+|Invoice (preview)| <ul><li>French-fr</li></ul>| French (France)-fr|
+|Invoice (preview)| <ul><li>Italian-it</li></ul>| Italian (Italy)-it|
+|Invoice (preview)| <ul><li>Portuguese-pt</li></ul>| Portuguese (Portugal)-pt|
+|Invoice (preview)| <ul><li>Dutch-nl</li></ul>| Dutch (Netherlands)-nl|
## Field extraction
Following are the line items extracted from an invoice in the JSON output respon
| Unit | String| The unit of the line item, e.g., kg, lb, etc. | Hours | |
| Date | Date| Date corresponding to each line item. Often it's a date the line item was shipped | 3/4/2021| 2021-03-04 |
| Tax | Number | Tax associated with each line item. Possible values include tax amount, tax %, and tax Y/N | 10% | |
-| VAT | Number | Stands for Value added tax. This is a flat tax levied on an item. Common in European countries | &euro;20.00 | |
+| VAT | Number | Stands for Value added tax. VAT is a flat tax levied on an item. Common in European countries | &euro;20.00 | |
-The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
+The invoice key-value pairs and line items extracted are in the `documentResults` section of the JSON output.
+
+### Key-value pairs (Preview)
+
+The prebuilt invoice **2022-06-30-preview** release returns key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field, such as a telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures.
+
+Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are always spans of text contained in the document. If you have documents where the same value is described in different ways, for example, customer and user, the associated key will be either customer or user based on context.
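As a sketch, the preview Python SDK returns these invoice key-value pairs alongside the typed invoice fields; the endpoint, key, and document URL are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)
result = client.begin_analyze_document_from_url(
    "prebuilt-invoice", "https://<your-storage>/invoice.pdf"  # placeholder URL
).result()

# Typed invoice fields, for example the amount due.
invoice = result.documents[0]
amount_due = invoice.fields.get("AmountDue")
if amount_due:
    print("AmountDue:", amount_due.value, "confidence:", amount_due.confidence)

# Untyped key-value pairs returned by the preview at no extra cost.
for pair in result.key_value_pairs or []:
    print(pair.key.content, "->", pair.value.content if pair.value else None)
```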
## Form Recognizer preview v3.0
The invoice key-value pairs and line items extracted are in the `documentResults
* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
-* Explore our [**REST API (preview)**](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
## Next steps
The invoice key-value pairs and line items extracted are in the `documentResults
* Explore our REST API:

> [!div class="nextstepaction"]
- > [Form Recognizer API v3.0 (Preview)](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
-
+ > [Form Recognizer API v3.0 (Preview)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
+ > [!div class="nextstepaction"]
+ > [Form Recognizer API v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/5ed8c9843c2794cbb1a96291)
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Previously updated : 03/11/2022 Last updated : 06/06/2022 recommendations: false-+ # Form Recognizer layout model
The Form Recognizer Layout API extracts text, tables, selection marks, and struc
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
-**Data extraction features**
+## Supported document types
-| **Layout model** | **Text Extraction** | **Selection Marks** | **Tables** |
+| **Model** | **Images** | **PDF** | **TIFF** |
| | | | |
| Layout | ✓ | ✓ | ✓ |
+### Data extraction
+
+| **Model** | **Text** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** |
+| | | | | | |
+| Layout | ✓ | ✓ | ✓ | ✓ | ✓ |
+
+**Supported paragraph roles**:
+
+* title
+* sectionHeading
+* footnote
+* pageHeader
+* pageFooter
+* pageNumber
+
+For a richer semantic analysis, paragraph roles are best used with unstructured documents to better understand the layout of the extracted content.
+ ## Development options The following tools are supported by Form Recognizer v2.1:
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|||
-|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-layout**|
+|**Layout model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript SDK**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|**prebuilt-layout**|
-### Try Form Recognizer
+## Try Form Recognizer
-See how data is extracted from forms and documents using the Form Recognizer Studio or Sample Labeling tool. You'll need the following resources:
+Try extracting data from forms and documents using the Form Recognizer Studio. You'll need the following resources:
* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data is extracted from forms and documents using the Form Recognizer Stu
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+### Form Recognizer Studio (preview)
> [!NOTE]
> Form Recognizer Studio is available with the preview (v3.0) API.

***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***

1. On the Form Recognizer Studio home page, select **Layout**
See how data is extracted from forms and documents using the Form Recognizer Stu
> [!div class="nextstepaction"] > [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
-#### Sample Labeling tool
-
-You'll need a form document. You can use our [sample form document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf).
-
-1. On the Sample Labeling tool home page, select **Use Layout to get text, tables, and selection marks**.
-
-1. Select **Local file** from the dropdown menu.
-
-1. Upload your file and select **Run Layout**
-
- :::image type="content" source="media/try-layout.png" alt-text="Screenshot: Screenshot: Sample Labeling tool dropdown layout file source selection menu.":::
-
- > [!div class="nextstepaction"]
- > [Try Sample Labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)
- ## Input requirements * For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned).
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier (4 MB for the free tier).
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.-
-> [!NOTE]
-> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
+* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This dimension corresponds to roughly 8-point font text at 150 DPI.
## Supported languages and locales *See* [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
-## Data extraction
+## Model extraction
-The layout model extracts table structures, selection marks, typeface and handwritten text, and bounding box coordinates from your documents.
-
-### Tables and table headers
+The layout model extracts text, selection marks, tables, paragraphs, and paragraph types (`roles`) from your documents.
-Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Tables can be complex with merged cells or columns, with or without borders, and with odd angles. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding box is output along with information whether it's recognized as part of a header or not. The model predicted header cells can span multiple rows and aren't necessarily the first rows in a table. They also work with rotated tables. Each table cell also includes the full text with references to the individual words in the `readResults` section.
+### Text lines and words
+Layout API extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines, if detected, along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
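A minimal sketch of walking the extracted words and handwritten-style spans with the preview Python SDK (`azure-ai-formrecognizer` 3.2.0b); the endpoint, key, and document URL are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)
result = client.begin_analyze_document_from_url(
    "prebuilt-layout", "https://<your-storage>/form.pdf"  # placeholder URL
).result()

for page in result.pages:
    for word in page.words:
        print(f"'{word.content}' (confidence: {word.confidence})")

# Styles flag handwritten content, when detected.
for style in result.styles or []:
    if style.is_handwritten:
        print("Handwritten span(s) detected, confidence:", style.confidence)
```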
### Selection marks
-Layout API also extracts selection marks from documents. Extracted selection marks include the bounding box, confidence, and state (selected/unselected). Selection mark information is extracted in the `readResults` section of the JSON output.
-
+Layout API also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text, if extracted, is also included as the starting index (`offset`) and `length` that reference the top-level `content` property containing the full text from the document.
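For example, a short sketch that reads the selection marks from an `AnalyzeResult` produced as in the layout sketch above:

```python
from azure.ai.formrecognizer import AnalyzeResult

def print_selection_marks(result: AnalyzeResult):
    # Selection marks are reported per page with a selected/unselected state.
    for page in result.pages:
        for mark in page.selection_marks:
            print(f"Page {page.page_number}: {mark.state} "
                  f"(confidence: {mark.confidence})")
```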
-### Text lines and words
-
-The layout model extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Typeface and handwritten text is extracted from lines and words. The service then returns bounding box coordinates, confidence scores, and style (handwritten or other). All the text information is included in the `readResults` section of the JSON output.
+### Tables and table headers
+Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding `polygon` is output along with information about whether it's recognized as a `columnHeader` or not. The API also works with rotated tables. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top-level `content` that contains the full text from the document.
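A sketch of walking that table output with the preview Python SDK, reusing an `AnalyzeResult` from a layout analysis as in the sketch above:

```python
from azure.ai.formrecognizer import AnalyzeResult

def print_tables(result: AnalyzeResult):
    for i, table in enumerate(result.tables):
        print(f"Table {i}: {table.row_count} rows x {table.column_count} columns")
        for cell in table.cells:
            header = " [columnHeader]" if cell.kind == "columnHeader" else ""
            print(f"  ({cell.row_index}, {cell.column_index}){header}: {cell.content!r}")
```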
-### Natural reading order for text lines (Latin only)
+### Paragraphs
-In Form Recognizer v2.1, you can specify the order in which the text lines are output with the `readingOrder` query parameter. Use `natural` for a more human-friendly reading order output as shown in the following example. This feature is only supported for Latin languages.
+The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top-level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
-In Form Recognizer v3.0, the natural reading order output is used by the service in all cases. Therefore, there's no `readingOrder` parameter provided in this version.
+### Paragraph roles
-### Handwritten classification for text lines (Latin only)
+The Layout model may flag certain paragraphs with their specialized type or `role` as predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
-The response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages.
+| **Predicted role** | **Description** |
+| | |
+| `title` | The main heading(s) on the page |
+| `sectionHeading` | One or more subheading(s) on the page |
+| `footnote` | Text near the bottom of the page |
+| `pageHeader` | Text near the top edge of the page |
+| `pageFooter` | Text near the bottom edge of the page |
+| `pageNumber` | Page number |
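As an illustration, a small helper that groups layout paragraphs by their predicted role, assuming a preview SDK version that surfaces the `paragraphs` collection; paragraphs without a specialized role are grouped as plain body text:

```python
from collections import defaultdict
from azure.ai.formrecognizer import AnalyzeResult

def paragraphs_by_role(result: AnalyzeResult) -> dict:
    # `role` is None for ordinary body text.
    grouped = defaultdict(list)
    for paragraph in result.paragraphs or []:
        grouped[paragraph.role or "body"].append(paragraph.content)
    return dict(grouped)
```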
### Select page numbers or ranges for text extraction
-For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
-
-## Form Recognizer preview v3.0
-
- The Form Recognizer preview introduces several new features and capabilities.
-
-* Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
-
-* Explore our [**REST API (preview)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) to learn more about the preview version and new capabilities.
+For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
## Next steps
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Title: Form Recognizer models
-description: Concepts encompassing data extraction and analysis using prebuilt models.
+description: Concepts related to data extraction and analysis using prebuilt models.
Previously updated : 03/16/2022 Last updated : 06/06/2022 recommendations: false
# Form Recognizer models
-Azure Form Recognizer prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. Prebuilt models use optical character recognition (OCR) combined with deep learning models to identify and extract predefined text and data fields common to specific form and document types. Form Recognizer extracts analyzes form and document data then returns an organized, structured JSON response. Form Recognizer v2.1 supports invoice, receipt, ID document, and business card models.
+ Azure Form Recognizer supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt document analysis or domain-specific model, or train a custom model tailored to your specific business needs and use cases. Form Recognizer can be used with the REST API or the Python, C#, Java, and JavaScript SDKs.
## Model overview
The W-2 model analyzes and extracts key information reported in each box on a W-
[:::image type="icon" source="media/studio/layout.png":::](https://formrecognizer.appliedai.azure.com/studio/layout)
-The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from forms and documents.
+The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from documents.
***Sample document processed using the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***: > [!div class="nextstepaction"]
+>
> [Learn more: layout model](concept-layout.md)

### Invoice

[:::image type="icon" source="media/studio/invoice.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)
-The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due. Currently, the model supports both English and Spanish invoices.
+The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
***Sample invoice processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)***:
The invoice model analyzes and extracts key information from sales invoices. The
[:::image type="icon" source="media/studio/receipt.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
-The receipt model analyzes and extracts key information from printed and handwritten receipts.
+* The receipt model analyzes and extracts key information from printed and handwritten sales receipts.
+
+* The preview version v3.0 also supports single-page hotel receipt processing.
***Sample receipt processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The business card model analyzes and extracts key information from business card
[:::image type="icon" source="media/studio/custom.png":::](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)
-The custom model analyzes and extracts data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
+* Custom models analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
+
+* The preview version v3.0 custom model supports signature detection in custom forms (template model) and cross-page tables in both template and neural models.
***Sample custom template processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
The custom model analyzes and extracts data from forms and documents specific to
A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 100 trained custom models to a single composed model.
-***Composed model dialog window[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
+***Composed model dialog window in [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
:::image type="content" source="media/studio/composed-model.png" alt-text="Screenshot of Form Recognizer Studio compose custom model dialog window.":::
A composed model is created by taking a collection of custom models and assignin
## Model data extraction
- | **Data extraction** | **Text extraction** |**Key-Value pairs** |**Fields**|**Selection Marks** | **Tables** |**Entities** |
-| |:: |::|:: |:: |:: |:: |
-|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | || | | |
-|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | ✓ | ✓ | ✓ | ✓ ||
-|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | ✓ || ✓ | ✓ | ✓ |
-| [prebuilt-layout](concept-layout.md#data-extraction) | Γ£ô | || Γ£ô | Γ£ô | |
-| [prebuilt-invoice](concept-invoice.md#field-extraction) | Γ£ô | Γ£ô |Γ£ô| Γ£ô | Γ£ô ||
-| [prebuilt-receipt](concept-receipt.md#field-extraction) | Γ£ô | Γ£ô |Γ£ô| | ||
-| [prebuilt-idDocument](concept-id-document.md#field-extraction) | Γ£ô | Γ£ô |Γ£ô| | ||
-| [prebuilt-businessCard](concept-business-card.md#field-extraction) | Γ£ô | Γ£ô | Γ£ô| | ||
-| [Custom](concept-custom.md#compare-model-features) |Γ£ô | Γ£ô || Γ£ô | Γ£ô | Γ£ô |
+ | **Model ID** | **Text extraction** | **Selection Marks** | **Tables** | **Paragraphs** | **Key-Value pairs** | **Fields** |**Entities** |
+ |:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+|🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | | | ✓ | | | |
+|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | ✓ | | ✓ | | ✓ | |
+|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ |
| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | ✓ | ✓ | ✓ | | | |
| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | ✓ | | ✓ | |
| [prebuilt-idDocument](concept-id-document.md#field-extraction) | ✓ | | | ✓ | | ✓ | |
| [prebuilt-businessCard](concept-business-card.md#field-extraction) | ✓ | | | ✓ | | ✓ | |
| [Custom](concept-custom.md#compare-model-features) | ✓ | ✓ | ✓ | ✓ | | ✓ | |
## Input requirements
A composed model is created by taking a collection of custom models and assignin
> [!NOTE] > The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool not the Form Recognizer Service.
-## Form Recognizer preview v3.0
-
- Form Recognizer v3.0 (preview) introduces several new features and capabilities:
-
-* [**Read (preview)**](concept-read.md) model is a new API that extracts text lines, words, their locations, detected languages, and handwritten text, if detected.
-* [**General document (preview)**](concept-general-document.md) model is a new API that uses a pre-trained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents.
-* [**Receipt (preview)**](concept-receipt.md) model supports single-page hotel receipt processing.
-* [**ID document (preview)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
-* [**W-2 (preview)**](concept-w2.md) model supports employee, employer, wage information, etc. from US W-2 forms.
-* [**Custom model API (preview)**](concept-custom.md) supports signature detection for custom forms.
- ### Version migration Learn how to use Form Recognizer v3.0 in your applications by following our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md)
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Title: Read - Form Recognizer
+ Title: Read OCR - Form Recognizer
-description: Learn concepts related to Read API analysis with Form Recognizer APIΓÇöusage and limits.
+description: Learn concepts related to Read OCR API analysis with Form Recognizer API - usage and limits.
Previously updated : 03/09/2022 Last updated : 06/06/2022 recommendations: false
-# Form Recognizer read model
+# Form Recognizer Read OCR model
-Form Recognizer v3.0 preview includes the new Read API model. The read model extracts typeface and handwritten text including mixed languages in documents. The read model can detect lines, words, locations, and languages and is the core of all the other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the read model as a foundation for extracting texts from documents.
+Form Recognizer v3.0 preview includes the new Read Optical Character Recognition (OCR) model. The Read OCR model extracts typeface and handwritten text including mixed languages in documents. The Read OCR model can detect lines, words, locations, and languages and is the core of all other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the Read OCR model as a foundation for extracting texts from documents.
+
+## Supported document types
+
+| **Model** | **Images** | **PDF** | **TIFF** | **Word** | **Excel** | **PowerPoint** | **HTML** |
+| | | | | | | | |
| Read | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+
+### Data extraction
+
+| **Read model** | **Text** | **[Language detection](language-support.md#detected-languages-read-api)** |
+| | | |
| prebuilt-read | ✓ | ✓ |
## Development options
The following resources are supported by Form Recognizer v3.0:
|-|||
|**Read model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-rest-api)</li><li>[**C# SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-csharp)</li><li>[**Python SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-python)</li><li>[**Java SDK**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-java)</li><li>[**JavaScript**](how-to-guides/use-prebuilt-read.md?pivots=programming-language-javascript)</li></ul>|**prebuilt-read**|
-## Data extraction
-
-| **Read model** | **Text Extraction** | **[Language detection](language-support.md#detected-languages-read-api)** |
-| | | |
-prebuilt-read | Γ£ô |Γ£ô |
-
-### Try Form Recognizer
+## Try Form Recognizer
-See how text is extracted from forms and documents using the Form Recognizer Studio. You'll need the following assets:
+Try extracting text from forms and documents using the Form Recognizer Studio. You'll need the following assets:
* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how text is extracted from forms and documents using the Form Recognizer Stu
:::image type="content" source="media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
-#### Form Recognizer Studio (preview)
+### Form Recognizer Studio (preview)
> [!NOTE]
-> Form Recognizer studio is available with the preview (v3.0) API.
+> Form Recognizer Studio is available with the preview (v3.0) API. The latest service preview is currently not enabled for analyzing Microsoft Word, Excel, PowerPoint, and HTML file formats in the Form Recognizer Studio.
***Sample form processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/read)***
See how text is extracted from forms and documents using the Form Recognizer Stu
## Input requirements
-* For best results, provide one clear photo or high-quality scan per document.
-* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location.
+* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Additionally, Microsoft Word, Excel, PowerPoint, and HTML files are supported with the Read API in **2022-06-30-preview**.
* For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed).
-* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier (4 MB for the free tier)
+* The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
+* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This dimension corresponds to roughly 8-point font text at 150 DPI.
## Supported languages and locales Form Recognizer preview version supports several languages for the read model. *See* our [Language Support](language-support.md) for a complete list of supported handwritten and printed languages.
-## Features
+## Data detection and extraction
-### Text lines and words
+### Pages
-Read API extracts text from documents and images. It accepts PDFs and images of documents and handles printed and/or handwritten text, and supports mixed languages. Text is extracted as text lines, words, bounding boxes, confidence scores, and style, whether handwritten or not, supported for Latin languages only.
+With the added support for Microsoft Word, Excel, PowerPoint, and HTML files, the page units in the model output are computed as shown:
-### Language detection
+| **File format** | **Computed page unit** | **Total pages** |
+| | | |
+|Images | Each image = 1 page unit | Total images |
+|PDF | Each page in the PDF = 1 page unit | Total pages in the PDF |
+|Word | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
+|Excel | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images |
+|PowerPoint| Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images |
+|HTML| Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
+
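As a worked example of the table above (an illustration of the stated rules, not an API call), a 7,500-character Word document with two embedded images yields ceil(7,500 / 3,000) = 3 text page units plus 2 image page units, or 5 in total:

```python
import math

def word_page_units(total_characters: int, embedded_images: int) -> int:
    # Up to 3,000 characters = 1 page unit; each embedded image = 1 page unit.
    return max(1, math.ceil(total_characters / 3000)) + embedded_images

print(word_page_units(7500, 2))  # 5
```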
+### Text lines and words
-Read adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the language at the text line level along with the confidence score.
+Read extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
-### Handwritten classification for text lines (Latin only)
+For Microsoft Word, Excel, PowerPoint, and HTML file formats, Read will extract all embedded text as is. For any embedded images, it will run OCR on the images to extract text and append the text from each image as an added entry to the `pages` collection. These added entries will include the extracted text lines and words, their bounding polygons, confidences, and the spans pointing to the associated text.
-The response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages.
+### Language detection
+
+Read adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict all detected languages for text lines along with the `confidence` in the `languages` collection under `analyzeResult`.
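A short sketch of reading those predictions with the preview Python SDK, given an `AnalyzeResult` from a prebuilt-read analysis:

```python
from azure.ai.formrecognizer import AnalyzeResult

def print_detected_languages(result: AnalyzeResult):
    for language in result.languages or []:
        print(f"Detected {language.locale} (confidence: {language.confidence}) "
              f"across {len(language.spans)} text span(s)")
```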
### Select page(s) for text extraction
-For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
+For large multi-page PDF documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
+
+> [!NOTE]
+> For Microsoft Word, Excel, PowerPoint, and HTML file formats, the Read API ignores the pages parameter and extracts all pages by default.
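For example, with the preview Python SDK the page selection is passed as a `pages` keyword argument (endpoint, key, and document URL are placeholders):

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)
# Analyze only pages 1-3 and page 5 of a large PDF.
poller = client.begin_analyze_document_from_url(
    "prebuilt-read",
    "https://<your-storage>/large-document.pdf",  # placeholder URL
    pages="1-3,5",
)
print(poller.result().content)
```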
## Next steps
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Title: Form Recognizer receipt model
-description: Concepts encompassing data extraction and analysis using the prebuilt receipt model
+description: Concepts related to data extraction and analysis using the prebuilt receipt model
Previously updated : 03/11/2022 Last updated : 06/06/2022 recommendations: false
# Form Recognizer receipt model
-The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, tax, and transaction total and returns a structured JSON data representation.
+The receipt model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from sales receipts. Receipts can be of various formats and quality including printed and handwritten receipts. The API extracts key information such as merchant name, merchant phone number, transaction date, total tax, and transaction total and returns a structured JSON data representation.
***Sample receipt processed with [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)***:
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | Model ID |
|-|-|--|
-|**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li></ul>|**prebuilt-receipt**|
+|**Receipt model**| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li></ul>|**prebuilt-receipt**|
### Try Form Recognizer
See how data, including time and date of transactions, merchant information, and
#### Sample Labeling tool (API v2.1)
-You will need a receipt document. You can use our [sample receipt document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-receipt.png).
+You'll need a receipt document. You can use our [sample receipt document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-receipt.png).
1. On the Sample Labeling tool home page, select **Use prebuilt model to get data**.
You will need a receipt document. You can use our [sample receipt document](http
* Supported file formats: JPEG/JPG, PNG, BMP, TIFF, and PDF (text-embedded or scanned). Text-embedded PDFs are best to eliminate the possibility of error in character extraction and location. * For PDF and TIFF, up to 2000 pages can be processed (with a free tier subscription, only the first two pages are processed). * The file size must be less than 500 MB for paid (S0) tier and 4 MB for free (F0) tier.
-* Image dimensions must be between 50 x 50 pixels and 10000 x 10000 pixels.
+* Image dimensions must be between 50 x 50 pixels and 10,000 x 10,000 pixels.
* PDF dimensions are up to 17 x 17 inches, corresponding to Legal or A3 paper size, or smaller. * The total size of the training data is 500 pages or less. * If your PDFs are password-locked, you must remove the lock before submission.
You will need a receipt document. You can use our [sample receipt document](http
| TransactionTime | Time | Time the receipt was issued | hh-mm-ss (24-hour) |
| Total | Number (USD)| Full transaction total of receipt | Two-decimal float|
| Subtotal | Number (USD) | Subtotal of receipt, often before taxes are applied | Two-decimal float|
-| Tax | Number (USD) | Tax on receipt (often sales tax or equivalent) | Two-decimal float |
+ | Tax | Number (USD) | Total tax on receipt (often sales tax or equivalent). **Renamed to "TotalTax" in 2022-06-30-preview version**. | Two-decimal float |
| Tip | Number (USD) | Tip included by buyer | Two-decimal float| | Items | Array of objects | Extracted line items, with name, quantity, unit price, and total price extracted | |
-| Name | String | Item name | |
-| Quantity | Number | Quantity of each item | Integer |
+| Name | String | Item description. **Renamed to "Description" in 2022-06-30-preview version**. | |
+| Quantity | Number | Quantity of each item | Two-decimal float |
| Price | Number | Individual price of each item unit| Two-decimal float |
-| Total Price | Number | Total price of line item | Two-decimal float |
+| TotalPrice | Number | Total price of line item | Two-decimal float |
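For illustration, a hedged sketch of reading the receipt total and line items with the preview Python SDK; the field names follow the table above (note the renames in **2022-06-30-preview**), and the endpoint, key, and image URL are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)
receipt = client.begin_analyze_document_from_url(
    "prebuilt-receipt", "https://<your-storage>/receipt.png"  # placeholder URL
).result().documents[0]

total = receipt.fields.get("Total")
if total:
    print("Total:", total.value, "confidence:", total.confidence)

items = receipt.fields.get("Items")
for item in (items.value if items else []):
    details = item.value  # dict of sub-fields for this line item
    name = details.get("Name")         # "Description" in 2022-06-30-preview
    price = details.get("TotalPrice")
    print(name.value if name else None, price.value if price else None)
```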
## Form Recognizer preview v3.0
You will need a receipt document. You can use our [sample receipt document](http
| Items.*.Category | String | Item category, for example, Room, Tax, etc. | |
| Items.*.Date | Date | Item date | yyyy-mm-dd |
| Items.*.Description | String | Item description | |
-| Items.*.TotalPrice | Number | Item total price | Integer |
+| Items.*.TotalPrice | Number | Item total price | Two-decimal float |
| Locale | String | Locale of the receipt, for example, en-US. | ISO language-country code |
| MerchantAddress | String | Listed address of merchant | |
| MerchantAliases | Array| | |
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Title: Form Recognizer W-2 form prebuilt model
+ Title: Form Recognizer W-2 prebuilt model
-description: Data extraction and analysis extraction using the prebuilt-tax Form W-2 model
+description: Data extraction and analysis using the prebuilt W-2 model
Previously updated : 03/25/2022 Last updated : 06/06/2022 recommendations: false
A W-2 is a multipart form divided into state and federal sections and consisting
## Development options
-The prebuilt W-2 form, model is supported by Form Recognizer v3.0 with the following tools:
+The prebuilt W-2 model is supported by Form Recognizer v3.0 with the following tools:
| Feature | Resources | Model ID | |-|-|--|
The prebuilt W-2 form, model is supported by Form Recognizer v3.0 with the follo
### Try Form Recognizer
-See how data is extracted from W-2 forms using the Form Recognizer Studio. You'll need the following resources:
+Try extracting data from W-2 forms using the Form Recognizer Studio. You'll need the following resources:
* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
> [!NOTE]
> Form Recognizer Studio is available with the v3.0 preview API.
-1. On the [Form Recognizer Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **W-2 form**.
+1. On the [Form Recognizer Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **W-2**.
1. You can analyze the sample W-2 document or select the **➕ Add** button to upload your own sample.
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
| Model | Language-Locale code | Default |
|--|:-|:|
-|prebuilt-tax.us.w2| <ul>English (United States)</ul></br>|English (United States)ΓÇöen-US|
+|prebuilt-tax.us.w2|<ul><li>English (United States)</li></ul>|English (United States)-en-US|
## Field extraction
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
| TaxYear | | Number | Tax year | 2020 | | W2FormVariant | | String | The variants of W-2 forms, including "W-2", "W-2AS", "W-2CM", "W-2GU", "W-2VI" | W-2 | - ### Migration guide and REST API v3.0 * Follow our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md) to learn how to use the preview version in your applications and workflows.
See how data is extracted from W-2 forms using the Form Recognizer Studio. You'l
## Next steps * Complete a Form Recognizer quickstart:-
-|Programming language | :::image type="content" source="media/form-recognizer-icon.png" alt-text="Form Recognizer icon from the Azure portal."::: |Programming language
-|::|::|::|
-|[**C#**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)||[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)|
-|[**Java**](quickstarts/try-v3-java-sdk.md#prebuilt-model)||[**Python**](quickstarts/try-v3-python-sdk.md#prebuilt-model)|
-|[**REST API**](quickstarts/try-v3-rest-api.md)|||
+> [!div class="checklist"]
+>
+> * [**REST API**](quickstarts/try-v3-rest-api.md)
+> * [**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)
+> * [**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)
+> * [**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)
+> * [**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
Previously updated : 03/25/2022 Last updated : 06/06/2022 # Configure Form Recognizer containers
> > Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. For more information, See [**Request approval to run container**](form-recognizer-container-install-run.md#request-approval-to-run-the-container).
-With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premise and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have several required settings and a few optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
+With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. These containers have both required and optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
## Configuration settings
Each container has the following configuration settings:
|Required|Setting|Purpose|
|--|--|--|
|Yes|[Key](#key-and-billing-configuration-setting)|Tracks billing information.|
-|Yes|[Billing](#key-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. _See_ [Billing]](form-recognizer-container-install-run.md#billing), for more information. For more information and a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../../cognitive-services/cognitive-services-custom-subdomains.md).|
+|Yes|[Billing](#key-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. For more information, _see_ [Billing](form-recognizer-container-install-run.md#billing). For more information and a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../../cognitive-services/cognitive-services-custom-subdomains.md).|
|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.|
-|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
+|No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) telemetry support to your container.|
|No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.| |No|HTTP Proxy|Configures an HTTP proxy for making outbound requests.| |No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. |
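To see how these settings are passed at runtime, here's a minimal sketch that starts a single feature container with `docker run`. The image name, port mapping, and resource limits are illustrative assumptions, and the argument names (`Eula`, `Billing`, `ApiKey`) follow the common Cognitive Services container convention; confirm the exact values in the install guide for your approved gated-preview access.

```bash
# Minimal sketch: start the Layout container with the three required settings.
# Eula accepts the license, Billing points at your Azure resource endpoint,
# and ApiKey supplies the key used for billing.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
  mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout \
  Eula=accept \
  Billing={YOUR_FORM_RECOGNIZER_ENDPOINT_URI} \
  ApiKey={YOUR_FORM_RECOGNIZER_KEY}
```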
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
Previously updated : 01/06/2022 Last updated : 06/06/2022 recommendations: false #Customer intent: I want to learn how to use create a Form Recognizer service in the Azure portal.
Let's get started:
1. Copy the key and endpoint values from your Form Recognizer resource and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
-1. If your overview page does not have the keys and endpoint visible, you can select the **Keys and Endpoint** button on the left navigation bar and retrieve them there.
+1. If your overview page doesn't have the keys and endpoint visible, you can select the **Keys and Endpoint** button on the left navigation bar and retrieve them there.
:::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL":::
That's it! You're now ready to start automating data extraction using Azure Form
* Try the [Form Recognizer Studio](concept-form-recognizer-studio.md), an online tool for visually exploring, understanding, and integrating features from the Form Recognizer service into your applications.
-* Complete a Form Recognizer [C#](quickstarts/try-v3-csharp-sdk.md),[Python](quickstarts/try-v3-python-sdk.md), [Java](quickstarts/try-v3-java-sdk.md), or [JavaScript](quickstarts/try-v3-javascript-sdk.md) quickstart and get started creating a document processing app in the development language of your choice.
+* Complete a Form Recognizer quickstart and get started creating a document processing app in the development language of your choice:
+
+ * [C#](quickstarts/try-v3-csharp-sdk.md)
+ * [Python](quickstarts/try-v3-python-sdk.md)
+ * [Java](quickstarts/try-v3-java-sdk.md)
+ * [JavaScript](quickstarts/try-v3-javascript-sdk.md)
applied-ai-services Use Prebuilt Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/use-prebuilt-read.md
recommendations: false
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-The current API version is ```2022-01-30-preview```.
+The current API version is ```2022-06-30```.
::: zone pivot="programming-language-csharp"
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
Previously updated : 04/22/2022 Last updated : 06/06/2022
Pre-Built Receipt and Business Cards support all English receipts and business c
|English (India)|`en-in`| |English (United States)| `en-us`|
+## Business card model
+
+The **2022-06-30-preview** release includes Japanese language support:
+
+|Language| Locale code |
+|:--|:-:|
+| Japanese | `ja` |
+ ## Invoice model Language| Locale code | |:--|:-:|
-|English (United States)|en-us|
-|Spanish (preview) | es |
+|English (United States) |en-US|
+|Spanish| es|
+|German (**2022-06-30-preview**)| de|
+|French (**2022-06-30-preview**)| fr|
+|Italian (**2022-06-30-preview**)|it|
+|Portuguese (**2022-06-30-preview**)|pt|
+|Dutch (**2022-06-30-preview**)| nl|
## ID documents
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Previously updated : 03/08/2022 Last updated : 06/06/2022 recommendations: false keywords: automated data processing, document processing, automated data entry, forms processing #Customer intent: As a developer of form-processing software, I want to learn what the Form Recognizer service does so I can determine if I should use it.- <!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD024 -->
Form Recognizer uses the following models to easily identify, extract, and analy
**Document analysis models**
-* [**Read model**](concept-read.md) | Extract typeface and handwritten text lines, words, locations, and detected languages from documents and images.
-* [**Layout model**](concept-layout.md) | Extract text, tables, selection marks, and structure information from documents (PDF and TIFF) and images (JPG, PNG, and BMP).
+* [**Read model**](concept-read.md) | Extract text lines, words, locations, and detected languages from documents and images.
+* [**Layout model**](concept-layout.md) | Extract text, tables, selection marks, and structure information from documents and images.
* [**General document model**](concept-general-document.md) | Extract key-value pairs, selection marks, and entities from documents. **Prebuilt models**
This section helps you decide which Form Recognizer v3.0 supported feature you s
| What type of document do you want to analyze?| How is the document formatted? | Your best solution | | --|-| -| |<ul><li>**W-2 Form**</li></ul>| Is your W-2 document composed in United States English (en-US) text?|<ul><li>If **Yes**, use the [**W-2 Form**](concept-w2.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
-|<ul><li>**Text-only document**</li></yl>| Is your text-only document _printed_ in a [supported language](language-support.md#read-layout-and-custom-form-template-model) or, if handwritten, is it composed in English?|<ul><li>If **Yes**, use the [**Read**](concept-invoice.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>
-|<ul><li>**Invoice**</li></yl>| Is your invoice document composed in English or Spanish text?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>
+|<ul><li>**Primarily text content**</li></ul>| Is your document _printed_ in a [supported language](language-support.md#read-layout-and-custom-form-template-model) and are you only interested in text and not tables, selection marks, and the structure?|<ul><li>If **Yes** to text-only extraction, use the [**Read**](concept-read.md) model.<li>If **No**, because you also need structure information, use the [**Layout**](concept-layout.md) model.</li></ul>
+|<ul><li>**General structured document**</li></ul>| Is your document mostly structured and does it contain a few fields and values that may not be covered by the other prebuilt models?|<ul><li>If **Yes**, use the [**General document (preview)**](concept-general-document.md) model.</li><li> If **No**, because the fields and values are complex and highly variable, train and build a [**Custom**](how-to-guides/build-custom-model-v3.md) model.</li></ul>
+|<ul><li>**Invoice**</li></ul>| Is your invoice document composed in a [supported language](language-support.md#invoice-model)?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>
|<ul><li>**Receipt**</li><li>**Business card**</li></ul>| Is your receipt or business card document composed in English text? | <ul><li>If **Yes**, use the [**Receipt**](concept-receipt.md) or [**Business Card**](concept-business-card.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>| |<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>| |<ul><li>**Form** or **Document**</li></ul>| Is your form or document an industry-standard format commonly used in your business or industry?| <ul><li>If **Yes**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md).</li><li>If **No**, you can [**Train and build a custom model**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model).
The following features and development options are supported by the Form Recogn
|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs, and named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> | |[**Layout model**](concept-layout.md) | Extract text, selection marks, and table structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>| |[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br></li><li>Custom model API v3.0 offers a new model type **Custom Neural** or custom document to analyze unstructured documents.</li></ul>| [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
-|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>|
+|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>|
|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>| |[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>| |[**Business card model**](concept-business-card.md) |Automated data processing and extraction of key information from business cards.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
This documentation contains the following article types:
> [!div class="checklist"] > > * Try our [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com)
-> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument) to learn more.
+> * Explore the [**REST API reference documentation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) to learn more.
> * If you're familiar with a previous version of the API, see the [**What's new**](./whats-new.md) article to learn of recent changes. ### [Form Recognizer v2.1](#tab/v2-1)
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Previously updated : 03/16/2022 Last updated : 06/06/2022 recommendations: false
[Reference documentation](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/@azure/ai-form-recognizer_4.0.0-beta.3/sdk/formrecognizer/ai-form-recognizer/) | [Package (npm)](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.3) | [Samples](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v4-bet)
-Get started with Azure Form Recognizer using the JavaScript programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDks into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure Form Recognizer using the JavaScript programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page. In this quickstart you'll use the following features to analyze and extract data and values from forms and documents:
-* [🆕 **General document**](#general-document-model)—Analyze and extract common fields from specific document types using a pre-trained invoice model.
+* [🆕 **General document**](#general-document-model)—Analyze and extract key-value pairs, selection marks, and entities from documents.
* [**Layout**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
-* [**Prebuilt Invoice**](#prebuilt-model)—Analyze and extract common fields from specific document types using a pre-trained model.
+* [**Prebuilt Invoice**](#prebuilt-model)—Analyze and extract common fields from specific document types using a pre-trained invoice model.
## Prerequisites
Extract text, tables, structure, key-value pairs, and named entities from docume
const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
- // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
- const key = "<your-endpoint>";
- const endpoint = "<your-key>";
+ // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
+ const key = "<your-key>";
+ const endpoint = "<your-endpoint>";
// sample document const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
Extract text, selection marks, text styles, table structures, and bounding regio
const { AzureKeyCredential, DocumentAnalysisClient } = require("@azure/ai-form-recognizer");
- // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
- const key = "<your-endpoint>";
- const endpoint = "<your-key>";
+ // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
+ const key = "<your-key>";
+ const endpoint = "<your-endpoint>";
// sample document const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
// using the PrebuiltModels object, rather than the raw model ID, adds strong typing to the model's output const { PrebuiltModels } = require("@azure/ai-form-recognizer");
- // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
- const key = "<your-endpoint>";
- const endpoint = "<your-key>";
+ // set `<your-key>` and `<your-endpoint>` variables with the values from the Azure portal.
+ const key = "<your-key>";
+ const endpoint = "<your-endpoint>";
// sample document const invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf";
In this quickstart, you used the Form Recognizer JavaScript SDK to analyze vario
## Next steps > [!div class="nextstepaction"]
-> [REST API v3.0reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)
+> [REST API v3.0 reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
> [!div class="nextstepaction"] > [Form Recognizer JavaScript reference library](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0-beta.1/index.html)
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Previously updated : 03/24/2022 Last updated : 06/06/2022
-# Get started: Form Recognizer REST API 2022-01-30-preview
+# Get started: Form Recognizer REST API 2022-06-30-preview
<!-- markdownlint-disable MD036 --> >[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-The current API version is ```2022-01-30-preview```.
+The current API version is **2022-06-30-preview**.
-| [Form Recognizer REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) | [Azure SDKS](https://azure.github.io/azure-sdk/releases/latest/https://docsupdatetracker.net/index.html) |
+| [Form Recognizer REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) | [Azure SDKs](https://azure.github.io/azure-sdk/releases/latest/index.html) |
Get started with Azure Form Recognizer using the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models using the REST API or by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
To learn more about Form Recognizer features and development options, visit our
**Custom Models** * Custom—Analyze and extract form fields and other content from your custom forms, using models you trained with your own form types.
-* Composed custom—Compose a collection of custom models and assign them to a single model built from your form types.
+* Composed custom—Compose a collection of custom models and assign them to a single model ID.
## Prerequisites
Before you run the cURL command, make the following changes:
#### POST request ```bash
-curl -v -i POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
+curl -v -i -X POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-06-30" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
``` #### Reference table
You'll receive a `202 (Success)` response that includes an **Operation-Location*
### Get analyze results (GET Request)
-After you've called the [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument) API, call the [**Get analyze result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetAnalyzeDocumentResult) API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+After you've called the [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument) API, call the [**Get analyze result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/GetAnalyzeDocumentResult) API to get the status of the operation and the extracted data. Before you run the command, make these changes:
1. Replace `{endpoint}` with the endpoint value from your Form Recognizer instance in the Azure portal. 1. Replace `{key}` with the key value from your Form Recognizer instance in the Azure portal.
-1. Replace `{modelID}` with the same model name you used to analyze your document.
+1. Replace `{modelID}` with the same modelID you used to analyze your document.
1. Replace `{resultID}` with the result ID from the [Operation-Location](#operation-location) header. <!-- markdownlint-disable MD024 --> #### GET request ```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/{model name}/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultId}?api-version=2022-06-30" -H "Ocp-Apim-Subscription-Key: {key}"
``` #### Examine the response
You'll receive a `200 (Success)` response with JSON output. The first field, `"s
"createdDateTime": "2022-03-25T19:31:37Z", "lastUpdatedDateTime": "2022-03-25T19:31:43Z", "analyzeResult": {
- "apiVersion": "2022-01-30-preview",
+ "apiVersion": "2022-06-30",
"modelId": "prebuilt-invoice", "stringIndexType": "textElements"... ..."pages": [
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
Previously updated : 05/23/2022 Last updated : 06/06/2022
For the usage with [Form Recognizer SDK](quickstarts/try-v3-csharp-sdk.md), [For
| Adjustable | No | No | | **Max size of OCR json response** | 500 MB | 500 MB | | Adjustable | No | No |
+| **Max number of Template models** | 500 | 5000 |
+| Adjustable | No | No |
+| **Max number of Neural models** | 100 | 500 |
+| Adjustable | No | No |
# [Form Recognizer v3.0 (Preview)](#tab/v30)
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
Previously updated : 02/15/2022 Last updated : 06/06/2022 recommendations: false
Form Recognizer v3.0 (preview) introduces several new features and capabilities:
* [**Custom document model (v3.0)**](concept-custom-neural.md) is a new custom model type to extract fields from structured and unstructured documents. * [**Receipt (v3.0)**](concept-receipt.md) model supports single-page hotel receipt processing. * [**ID document (v3.0)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
-* [**Custom model API (v3.0)**](concept-custom.md) supports signature detection for custom forms.
+* [**Custom model API (v3.0)**](concept-custom.md) supports signature detection for custom template models.
+* [**Custom model API (v3.0)**](overview.md) supports analysis of all the newly added prebuilt models. For a complete list of prebuilt models, see the [overview](overview.md) page.
In this article, you'll learn the differences between Form Recognizer v2.1 and v3.0 and how to move to the newer version of the API.
+> [!CAUTION]
+>
+> * REST API **2022-06-30-preview** release includes a breaking change in the REST API analyze response JSON.
+> * The `boundingBox` property is renamed to `polygon` in each instance.
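For example, you can confirm which shape a saved analyze response uses by inspecting one word's location with `jq`; this is a sketch, and the file name is a placeholder:

```bash
# Under 2022-01-30-preview this array was named "boundingBox";
# in a 2022-06-30-preview response the same array is named "polygon".
jq '.analyzeResult.pages[0].words[0].polygon' analyze-result.json
```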
+ ## Changes to the REST API endpoints The v3.0 REST API combines the analysis operations for layout analysis, prebuilt models, and custom models into a single pair of operations by assigning **`documentModels`** and **`modelId`** to the layout analysis (prebuilt-layout) and prebuilt models.
In this article, you'll learn the differences between Form Recognizer v2.1 and v
### POST request ```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview
+https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-06-30
``` ### GET request ```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/AnalyzeResult/{resultId}?api-version=2022-01-30-preview
+https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/AnalyzeResult/{resultId}?api-version=2022-06-30
``` ### Analyze operation
https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/
| Model | v2.1 | v3.0 | |:--| :--| :--| | **Request URL prefix**| **https://{your-form-recognizer-endpoint}/formrecognizer/v2.1** | **https://{your-form-recognizer-endpoint}/formrecognizer** |
-|🆕 **General document**|N/A|/documentModels/prebuilt-document:analyze |
-| **Layout**| /layout/analyze |/documentModels/prebuilt-layout:analyze|
-|**Custom**| /custom/{modelId}/analyze |/documentModels/{modelId}:analyze |
-| **Invoice** | /prebuilt/invoice/analyze | /documentModels/prebuilt-invoice:analyze |
-| **Receipt** | /prebuilt/receipt/analyze | /documentModels/prebuilt-receipt:analyze |
-| **ID document** | /prebuilt/idDocument/analyze | /documentModels/prebuilt-idDocument:analyze |
-|**Business card**| /prebuilt/businessCard/analyze| /documentModels/prebuilt-businessCard:analyze|
-|**W-2**| /prebuilt/w-2/analyze| /documentModels/prebuilt-w-2:analyze|
+|🆕 **General document**|N/A|`/documentModels/prebuilt-document:analyze` |
+| **Layout**| /layout/analyze |`/documentModels/prebuilt-layout:analyze`|
+|**Custom**| /custom/{modelId}/analyze |`/documentModels/{modelId}:analyze` |
+| **Invoice** | /prebuilt/invoice/analyze | `/documentModels/prebuilt-invoice:analyze` |
+| **Receipt** | /prebuilt/receipt/analyze | `/documentModels/prebuilt-receipt:analyze` |
+| **ID document** | /prebuilt/idDocument/analyze | `/documentModels/prebuilt-idDocument:analyze` |
+|**Business card**| /prebuilt/businessCard/analyze| `/documentModels/prebuilt-businessCard:analyze`|
+|**W-2**| /prebuilt/w-2/analyze| `/documentModels/prebuilt-w-2:analyze`|
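Putting the new URL pattern together, a v3.0 layout analysis request looks like the following sketch; the endpoint, key, and document URL are placeholders:

```bash
curl -v -i -X POST "{endpoint}/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-06-30" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: {key}" \
  --data-ascii "{'urlSource': '{your-document-url}'}"
```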
### Analyze request body
Base64 encoding is also supported in Form Recognizer v3.0:
Parameters that continue to be supported:
-* `pages`
-* `locale`
+* `pages` : Analyze only a specific subset of pages in the document. The value is a comma-separated list of page numbers and ranges, indexed from `1`, for example, `"1-3,5,7-9"`.
+* `locale` : Locale hint for text recognition and document analysis. The value can be a language code (for example, `en`, `fr`) or a BCP 47 language tag (for example, `en-US`).
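Both parameters are passed as query string values on the analyze call, as in this sketch; the endpoint, key, and document URL are placeholders:

```bash
# Analyze only pages 1-3 of an en-US invoice.
curl -v -i -X POST "{endpoint}/formrecognizer/documentModels/prebuilt-invoice:analyze?api-version=2022-06-30&pages=1-3&locale=en-US" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: {key}" \
  --data-ascii "{'urlSource': '{your-document-url}'}"
```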
-Parameters no longer supported:
+Parameters no longer supported:
* includeTextDetails
Analyze response has been refactored to the following top-level results to suppo
{ // Basic analyze result metadata
-"apiVersion": "2022-01-30-preview", // REST API version used
+"apiVersion": "2022-06-30", // REST API version used
"modelId": "prebuilt-invoice", // ModelId used "stringIndexType": "textElements", // Character unit used for string offsets and lengths: // textElements, unicodeCodePoint, utf16CodeUnit // Concatenated content in global reading order across pages.
Analyze response has been refactored to the following top-level results to suppo
"angle": 0, // Orientation of content in clockwise direction (degree) "width": 0, // Page width "height": 0, // Page height
-"unit": "pixel", // Unit for width, height, and bounding box coordinates
+"unit": "pixel", // Unit for width, height, and polygon coordinates
"spans": [ // Parts of top-level content covered by page { "offset": 0, // Offset in content
Analyze response has been refactored to the following top-level results to suppo
{ "rowCount": 1, // Number of rows in table "columnCount": 1, // Number of columns in table
-"boundingRegions": [ // Bounding boxes potentially across pages covered by table
+"boundingRegions": [ // Polygons or Bounding boxes potentially across pages covered by table
{ "pageNumber": 1, // 1-indexed page number
-"boundingBox": [ ... ], // Bounding box
+"polygon": [ ... ], // Previously Bounding box, renamed to polygon in the 2022-06-30-preview API
} ], "spans": [ ... ], // Parts of top-level content covered by table // List of cells in table
Analyze response has been refactored to the following top-level results to suppo
] } -- ``` ## Build or train model
The model object has three updates in the new API
* ```modelId``` is now a property that can be set on a model for a human readable name. * ```modelName``` has been renamed to ```description```
-* ```buildMode``` is a new proerty with values of ```template``` for custom form models or ```neural``` for custom document models.
+* ```buildMode``` is a new property with values of ```template``` for custom form models or ```neural``` for custom document models.
-The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset, it returns the result via the Operation-Location header in the response. Poll this model operation URL, via a GET request to check the status of the build operation (minimum recommended interval between requests is 1 second). Unlike v2.1, this URL is not the resource location of the model. Instead, the model URL can be constructed from the given modelId, also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and result contains the custom model info. If errors are encountered, status is set to ```failed``` and the error is returned.
+The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset; it returns the result via the Operation-Location header in the response. Poll this model operation URL via a GET request to check the status of the build operation (the minimum recommended interval between requests is 1 second). Unlike v2.1, this URL isn't the resource location of the model. Instead, the model URL can be constructed from the given modelId, also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and the result contains the custom model info. If errors are encountered, status is set to ```failed``` and the error is returned.
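A status poll against the URL returned in the Operation-Location header might look like the following sketch; the operation URL and key are placeholders:

```bash
# Repeat until the returned "status" is "succeeded" or "failed";
# wait at least 1 second between requests.
curl -v -X GET "{operation-location-url}" -H "Ocp-Apim-Subscription-Key: {key}"
```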
The following code is a sample build request using a SAS token. Note the trailing slash when setting the prefix or folder path. ```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-preview
+POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
{ "modelId": {modelId},
POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build
Model compose is now limited to single level of nesting. Composed models are now consistent with custom models with the addition of ```modelId``` and ```description``` properties. ```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:compose?api-version=2022-01-30-preview
+POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:compose?api-version=2022-06-30
{ "modelId": "{composedModelId}", "description": "{composedModelDescription}",
The only changes to the copy model function are:
***Authorize the copy*** ```json
-POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-version=2022-01-30-preview
+POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-version=2022-06-30
{ "modelId": "{targetModelId}", "description": "{targetModelDescription}",
POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-versio
Use the response body from the authorize action to construct the request for the copy. ```json
-POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copy-to?api-version=2022-01-30-preview
+POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copy-to?api-version=2022-06-30
{ "targetResourceId": "{targetResourceId}", "targetResourceRegion": "{targetResourceRegion}",
List models have been extended to now return prebuilt and custom models. All pre
***Sample list models request*** ```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-version=2022-01-30-preview
+GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-version=2022-06-30
``` ## Change to get model
GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-ve
As get model now includes prebuilt models, the get operation returns a ```docTypes``` dictionary. Each document type is described by its name, optional description, field schema, and optional field confidence. The field schema describes the list of fields potentially returned with the document type. ```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview
+GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-06-30
``` ## New get info operation
GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{model
The ```info``` operation on the service returns the custom model count and custom model limit. ```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/info? api-version=2022-01-30-preview
+GET https://{your-form-recognizer-endpoint}/formrecognizer/info?api-version=2022-06-30
``` ***Sample response***
GET https://{your-form-recognizer-endpoint}/formrecognizer/info? api-version=202
In this migration guide, you've learned how to upgrade your existing Form Recognizer application to use the v3.0 APIs. Continue to use the 2.1 API for all GA features and use the 3.0 API for any of the preview features.
-* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)
+* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
* [What is Form Recognizer?](overview.md) * [Form Recognizer quickstart](./quickstarts/try-sdk-rest-api.md)
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 02/28/2022 Last updated : 06/06/2022 - <!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates.
+## June 2022
+
+### Form Recognizer v3.0 preview release (beta.3)
+
+The **2022-06-30-preview** release is the latest update to the Form Recognizer service for v3.0 capabilities. There are considerable updates across the feature APIs:
+
+* [🆕 **Layout extends structure extraction**](concept-layout.md). Layout now includes added structure elements including sections, section headers, and paragraphs. This update enables finer-grained document segmentation scenarios. For a complete list of structure elements identified, _see_ [enhanced structure](concept-layout.md#data-extraction).
+* [🆕 **Custom neural model tabular fields support**](concept-custom-neural.md). Custom document models now support tabular fields. Tabular fields are also multi-page by default. To learn more about tabular fields in custom neural models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+* [🆕 **Custom template model tabular fields support for cross-page tables**](concept-custom-template.md). Custom form models now support tabular fields across pages. To learn more about tabular fields in custom template models, _see_ [tabular fields](concept-custom-neural.md#tabular-fields).
+* [🆕 **Invoice model output now includes general document key-value pairs**](concept-invoice.md). Where invoices contain required fields beyond the fields included in the prebuilt model, the general document model supplements the output with key-value pairs. _See_ [key-value pairs](concept-invoice.md#key-value-pairs-preview).
+* [🆕 **Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales).
+* [🆕 **Prebuilt business card**](concept-business-card.md). The business card model now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales).
+* [🆕 **Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [page extraction](concept-read.md#pages).
+ ## February 2022 ### Form Recognizer v3.0 preview release
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents. * [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices. * [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
-* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten support for the same features expands to Japanese and Korean in addition to English, Chinese Simplified, French, German, Italian, Portuguese, and Spanish languages.
+* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten language support expands to Japanese and Korean.
Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/try-v3-python-sdk.md), or [.NET](quickstarts/try-v3-csharp-sdk.md) SDK for the v3.0 preview API. #### Form Recognizer model data extraction
- | **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Entities** |
+ | **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Entities** |**Signatures**|
| | :: | :: | :: | :: | :: | :: |
- |🆕Read | ✓ | | | | |
- |🆕General document | ✓ | ✓ | ✓ | ✓ | ✓ |
- | Layout | ✓ | | ✓ | ✓ | |
- | Invoice | ✓ | ✓ | ✓ | ✓ ||
- |Receipt | ✓ | ✓ | | ||
- | ID document | ✓ | ✓ | | ||
- | Business card | ✓ | ✓ | | ||
- | Custom |✓ | ✓ | ✓ | ✓ | ✓ |
+ |🆕Read | ✓ | | | | | |
+ |🆕General document | ✓ | ✓ | ✓ | ✓ | ✓ | |
+ | Layout | ✓ | | ✓ | ✓ | | |
+ | Invoice | ✓ | ✓ | ✓ | ✓ || |
+ |Receipt | ✓ | ✓ | | || |
+ | ID document | ✓ | ✓ | | || |
+ | Business card | ✓ | ✓ | | || |
+ | Custom template |✓ | ✓ | ✓ | ✓ | | ✓ |
+ | Custom neural |✓ | ✓ | ✓ | ✓ | | |
#### Form Recognizer SDK beta preview release
pip package version 3.1.0b4
**Form Recognizer v2.1 public preview 3 is now available.** v2.1-preview.3 has been released, including the following features:
-* **New prebuilt ID model** The new prebuilt ID model enables customers to take IDs and return structured data to automate processing. It combines our powerful Optical Character Recognition (OCR) capabilities with ID understanding models to extract key information from passports and U.S. driver licenses, such as name, date of birth, issue date, expiration date, and more.
+* **New prebuilt ID model** The new prebuilt ID model enables customers to take IDs and return structured data to automate processing. It combines our powerful Optical Character Recognition (OCR) capabilities with ID understanding models to extract key information from passports and U.S. driver licenses.
[Learn more about the prebuilt ID model](./concept-id-document.md)
pip package version 3.1.0b4
:::image type="content" source="./media/table-labeling.png" alt-text="Table labeling" lightbox="./media/table-labeling.png":::
- In addition to labeling tables, you can now label empty values and regions; if some documents in your training set don't have values for certain fields, you can label them so that your model will know to extract values properly from analyzed documents.
+ In addition to labeling tables, you can now label empty values and regions. If some documents in your training set don't have values for certain fields, you can label them so that your model will know to extract values properly from analyzed documents.
* **Support for 66 new languages** - The Layout API and Custom Models for Form Recognizer now support 73 languages.
pip package version 3.1.0b4
![Screenshot: Sample Labeling tool.](./media/ui-preview.jpg) * **Feedback Loop** - When Analyzing files via the Sample Labeling tool you can now also add it to the training set and adjust the labels if necessary and train to improve the model.
-* **Auto Label Documents** - Automatically labels additional documents based on previous labeled documents in the project.
+* **Auto Label Documents** - Automatically labels added documents based on previously labeled documents in the project.
## August 2020
pip package version 3.1.0b4
* **CopyModel API added to client SDKs** - You can now use the client SDKs to copy models from one subscription to another. See [Back up and recover models](./disaster-recovery.md) for general information on this feature. * **Azure Active Directory integration** - You can now use your Azure AD credentials to authenticate your Form Recognizer client objects in the SDKs.
-* **SDK-specific changes** - This change includes both minor feature additions and breaking changes. For more information, *see* the SDK changelogs for more information.
+* **SDK-specific changes** - This change includes both minor feature additions and breaking changes. For more information, _see_ the SDK changelogs.
* [C# SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md) * [Python SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md) * [Java SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-jav)
pip package version 3.1.0b4
* [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme) * [JavaScript SDK](/javascript/api/overview/azure/ai-form-recognizer-readme)
- The new SDK supports all the features of the v2.0 REST API for Form Recognizer. For example, you can train a model with or without labels and extract text, key-value pairs and tables from your forms, extract data from receipts with the pre-built receipts service and extract text and tables with the layout service from your documents. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
+ The new SDK supports all the features of the v2.0 REST API for Form Recognizer. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
-* **Copy Custom Model** You can now copy models between regions and subscriptions using the new Copy Custom Model feature. Before invoking the Copy Custom Model API, you must first obtain authorization to copy into the target resource by calling the Copy Authorization operation against the target resource endpoint.
+* **Copy Custom Model** You can now copy models between regions and subscriptions using the new Copy Custom Model feature. Before invoking the Copy Custom Model API, you must first obtain authorization to copy into the target resource. This authorization is obtained by calling the Copy Authorization operation against the target resource endpoint.
* [Generate a copy authorization](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModelAuthorization) REST API * [Copy a custom model](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/CopyCustomFormModel) REST API
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
Follow the next steps to use a managed identity for Azure resources on a Hybrid
Get-AzVM -DefaultProfile $AzureContext | Select Name ```
- If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then:
If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you run the runbook in an Azure sandbox instead of on a Hybrid Runbook Worker and you want to use a user-assigned managed identity, then:
1. From line 5, remove `$AzureContext = (Connect-AzAccount -Identity).context`, 1. Replace it with `$AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context`, and 1. Enter the Client ID.
automation Automation Managing Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-managing-data.md
The Automation geo-replication service isn't accessible directly to external cus
## Next steps
+* To learn about security guidelines, see [Security best practices in Azure Automation](automation-security-guidelines.md).
* To learn more about secure assets in Azure Automation, see [Encryption of secure assets in Azure Automation](automation-secure-asset-encryption.md).- * To find out more about geo-replication, see [Creating and using active geo-replication](/azure/azure-sql/database/active-geo-replication-overview).
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-role-based-access-control.md
Title: Manage role permissions and security in Azure Automation
-description: This article describes how to use Azure role-based access control (Azure RBAC), which enables access management for Azure resources.
+description: This article describes how to use Azure role-based access control (Azure RBAC), which enables access management and role permissions for Azure resources.
Last updated 09/10/2021
#Customer intent: As an administrator, I want to understand permissions so that I use the least necessary set of permissions.
-# Manage role permissions and security in Automation
+# Manage role permissions and security in Azure Automation
Azure role-based access control (Azure RBAC) enables access management for Azure resources. Using [Azure RBAC](../role-based-access-control/overview.md), you can segregate duties within your team and grant only the amount of access to users, groups, and applications that they need to perform their jobs. You can grant role-based access to users using the Azure portal, Azure Command-Line tools, or Azure Management APIs.
When a user assigned to the Automation Operator role on the Runbook scope views
## Next steps
+* To learn about security guidelines, see [Security best practices in Azure Automation](automation-security-guidelines.md).
* To find out more about Azure RBAC using PowerShell, see [Add or remove Azure role assignments using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md). * For details of the types of runbooks, see [Azure Automation runbook types](automation-runbook-types.md). * To start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-secure-asset-encryption.md
To revoke access to customer-managed keys, use PowerShell or the Azure CLI. For
## Next steps
+- To learn about security guidelines, see [Security best practices in Azure Automation](automation-security-guidelines.md).
- To understand Azure Key Vault, see [What is Azure Key Vault?](../key-vault/general/overview.md). - To work with certificates, see [Manage certificates in Azure Automation](shared-resources/certificates.md). - To handle credentials, see [Manage credentials in Azure Automation](shared-resources/credentials.md).
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
Title: Azure Automation security guidelines, security best practices Automation.
+ Title: Azure Automation security guidelines, security best practices for Automation jobs.
description: This article describes the guidelines that Azure Automation offers to ensure a secure configuration of your Automation account, Hybrid Runbook Worker role, authentication certificates and identities, network isolation, and policies.
Last updated 02/16/2022
-# Best practices for security in Azure Automation
+# Security best practices in Azure Automation
This article details the best practices for executing automation jobs securely. [Azure Automation](./overview.md) provides you with a platform to orchestrate frequent, time-consuming, error-prone infrastructure management and operational tasks, as well as mission-critical operations. This service allows you to execute scripts, known as automation runbooks, seamlessly across cloud and hybrid environments.
automation Automation Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md
Title: Automation services in Azure - overview
-description: This article tells what are the Automation services in Azure and how to use it to automate the lifecycle of infrastructure and applications.
+description: This article describes the Automation services in Azure and how to compare and use them to automate the lifecycle of infrastructure and applications.
keywords: azure automation services, automanage, Bicep, Blueprints, Guest Config, Policy, Functions Last updated 03/04/2022
azure-app-configuration Howto Disable Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-disable-public-access.md
+
+ Title: How to disable public access in Azure App Configuration
+description: How to disable public access to your Azure App Configuration store.
++++ Last updated : 05/25/2022+++
+# Disable public access in Azure App Configuration
+
+In this article, you'll learn how to disable public access for your Azure App Configuration store. Setting up private access can offer better security for your configuration store.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- We assume you already have an App Configuration store. If you want to create one, [create an App Configuration store](quickstart-aspnet-core-app.md).
+
+## Sign in to Azure
+
+You'll need to sign in to Azure first to access the App Configuration service.
+
+### [Portal](#tab/azure-portal)
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+### [Azure CLI](#tab/azure-cli)
+
+Sign in to Azure using the `az login` command in the [Azure CLI](/cli/azure/install-azure-cli).
+
+```azurecli-interactive
+az login
+```
+
+This command will prompt your web browser to launch and load an Azure sign-in page. If the browser fails to open, use the device code flow with `az login --use-device-code`. For more sign-in options, go to [sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+++
+## Disable public access to a store
+
+Azure App Configuration offers three public access options:
+
+- Automatic public access: public network access is enabled, as long as you don't have a private endpoint present. Once you create a private endpoint, App Configuration disables public network access and enables private access. This option can only be selected when creating the store.
+- Disabled: public access is disabled and no traffic can access this resource unless it's through a private endpoint.
+- Enabled: all networks can access this resource.
+
+To disable access to the App Configuration store from public networks, follow the process below.
+
+### [Portal](#tab/azure-portal)
+
+1. In your App Configuration store, under **Settings**, select **Networking**.
+1. Under **Public Access**, select **Disabled** to disable public access to the App Configuration store and only allow access through private endpoints. If you already had public access disabled and instead wanted to enable public access to your configuration store, you would select **Enabled**.
+
+ > [!NOTE]
+ > Once you've switched **Public Access** to **Disabled** or **Enabled**, you won't be able to select **Public Access: Automatic** anymore, as this option can only be selected when creating the store.
+
+1. Select **Apply**.
++
+### [Azure CLI](#tab/azure-cli)
+
+In the CLI, run the following code:
+
+```azurecli-interactive
+az appconfig update --name <name-of-the-appconfig-store> --enable-public-network false
+```
+
+> [!NOTE]
+> When you create an App Configuration store without specifying whether you want public access to be enabled or disabled, public access is set to automatic by default. After you've set the `--enable-public-network` parameter, you won't be able to switch back to automatic public access.
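+
+To verify the change, you can query the store's public network access setting. This is a minimal check and assumes the `publicNetworkAccess` property exposed by `az appconfig show`:
+
+```azurecli-interactive
+# Prints "Disabled" once public network access has been turned off
+az appconfig show --name <name-of-the-appconfig-store> --query publicNetworkAccess --output tsv
+```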
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+>[Using private endpoints for Azure App Configuration](./concept-private-endpoint.md)
azure-arc Create Complete Managed Instance Indirectly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-indirectly-connected.md
NAME STATE
<namespace> Ready ```
-## Create Azure Arc-enabled SQL Managed Instance
+## Create an instance of Azure Arc-enabled SQL Managed Instance
Now, we can create the Azure MI for indirectly connected mode with the following command:
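As a minimal sketch (assuming default storage classes and sizes; adjust the instance name and namespace for your environment), the indirect-mode create can look like:

```azurecli
az sql mi-arc create --name <instance_name> --k8s-namespace <namespace> --use-k8s
```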
To connect with Azure Data Studio, see [Connect to Azure Arc-enabled SQL Managed
## Next steps
-[Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md).
+[Upload usage data, metrics, and logs to Azure](upload-metrics-and-logs-to-azure-monitor.md).
azure-arc Delete Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-managed-instance.md
Title: Delete Azure Arc-enabled SQL Managed Instance
-description: Delete Azure Arc-enabled SQL Managed Instance
+ Title: Delete an Azure Arc-enabled SQL Managed Instance
+description: Learn how to delete an Azure Arc-enabled SQL Managed Instance and optionally, reclaim associated Kubernetes persistent volume claims (PVCs).
+
Last updated 07/30/2021
-# Delete Azure Arc-enabled SQL Managed Instance
-This article describes how you can delete an Azure Arc-enabled SQL Managed Instance.
+# Delete an Azure Arc-enabled SQL Managed Instance
+In this how-to guide, you'll find and then delete an Azure Arc-enabled SQL Managed Instance. Optionally, after deleting managed instances, you can reclaim associated Kubernetes persistent volume claims (PVCs).
-## View Existing Azure Arc-enabled SQL Managed Instances
-To view SQL Managed Instances, run the following command:
+1. Find existing Azure Arc-enabled SQL Managed Instances:
-```azurecli
-az sql mi-arc list --k8s-namespace <namespace> --use-k8s
-```
+ ```azurecli
+ az sql mi-arc list --k8s-namespace <namespace> --use-k8s
+ ```
-Output should look something like this:
+ Example output:
-```console
-Name Replicas ServerEndpoint State
- - - -
-demo-mi 1/1 10.240.0.4:32023 Ready
-```
+ ```console
+ Name Replicas ServerEndpoint State
+ - - -
+ demo-mi 1/1 10.240.0.4:32023 Ready
+ ```
-## Delete Azure Arc-enabled SQL Managed Instance
+1. To delete the SQL Managed Instance, run the command appropriate for your deployment type:
-To delete a SQL Managed Instance, run the appropriate command for your deployment type. For example:
+ 1. **Indirectly connected mode**:
-### [Indirectly connected mode](#tab/indirectly)
+ ```azurecli
+ az sql mi-arc delete --name <instance_name> --k8s-namespace <namespace> --use-k8s
+ ```
-```azurecli
-az sql mi-arc delete -n <instance_name> --k8s-namespace <namespace> --use-k8s
-```
+ Example output:
-Output should look something like this:
+ ```azurecli
+ # az sql mi-arc delete --name demo-mi --k8s-namespace <namespace> --use-k8s
+ Deleted demo-mi from namespace arc
+ ```
-```azurecli
-# az sql mi-arc delete -n demo-mi --k8s-namespace <namespace> --use-k8s
-Deleted demo-mi from namespace arc
-```
+ 1. **Directly connected mode**:
-### [Directly connected mode](#tab/directly)
+ ```azurecli
+ az sql mi-arc delete --name <instance_name> --resource-group <resource_group>
+ ```
-```azurecli
-az sql mi-arc delete -n <instance_name> -g <resource_group>
-```
+ Example output:
-Output should look something like this:
+ ```azurecli
+ # az sql mi-arc delete --name demo-mi --resource-group my-rg
+ Deleted demo-mi from namespace arc
+ ```
-```azurecli
-# az sql mi-arc delete -n demo-mi -g my-rg
-Deleted demo-mi from namespace arc
-```
+## Optional - Reclaim Kubernetes PVCs
-
+A Persistent Volume Claim (PVC) is a request for storage from a Kubernetes cluster, created when you provision and add storage to a SQL Managed Instance. Deleting PVCs is recommended but it isn't mandatory. However, if you don't reclaim these PVCs, you'll eventually end up with errors in your Kubernetes cluster. For example, you might be unable to create, read, update, or delete resources from the Kubernetes API, or to run commands like `az arcdata dc export`, because the controller pods were evicted from the Kubernetes nodes due to storage issues (normal Kubernetes behavior). You can see messages in the logs similar to:
-## Reclaim the Kubernetes Persistent Volume Claims (PVCs)
+- Annotations: microsoft.com/ignore-pod-health: true
+- Status: Failed
+- Reason: Evicted
+- Message: The node was low on resource: ephemeral-storage. Container controller was using 16372Ki, which exceeds its request of 0.
-A PersistentVolumeClaim (PVC) is a request for storage by a user from Kubernetes cluster while creating and adding storage to a SQL Managed Instance. Deleting a SQL Managed Instance does not remove its associated [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). This is by design. The intention is to help the user to access the database files in case the deletion of instance was accidental. Deleting PVCs is not mandatory. However it is recommended. If you don't reclaim these PVCs, you'll eventually end up with errors as your Kubernetes cluster will run out of disk space or usage of the same SQL Managed Instance name while creating new instance might cause inconsistencies. To reclaim the PVCs, take the following steps:
+By design, deleting a SQL Managed Instance doesn't remove its associated [PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). The intention is to ensure that you can access the database files in case the deletion was accidental.
-### 1. List the PVCs for the server group you deleted
+1. To reclaim the PVCs, take the following steps:
+ 1. Find the PVCs for the server group you deleted.
-To list the PVCs, run the following command:
-```console
-kubectl get pvc
-```
+ ```console
+ kubectl get pvc
+ ```
-In the example below, notice the PVCs for the SQL Managed Instances you deleted.
+ In the example below, notice the PVCs for the SQL Managed Instances you deleted.
-```console
-# kubectl get pvc -n arc
+ ```console
+ # kubectl get pvc -n arc
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-data-demo-mi-0 Bound pvc-1030df34-4b0d-4148-8986-4e4c20660cc4 5Gi RWO managed-premium 13h
-logs-demo-mi-0 Bound pvc-11836e5e-63e5-4620-a6ba-d74f7a916db4 5Gi RWO managed-premium 13h
-```
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ data-demo-mi-0 Bound pvc-1030df34-4b0d-4148-8986-4e4c20660cc4 5Gi RWO managed-premium 13h
+ logs-demo-mi-0 Bound pvc-11836e5e-63e5-4620-a6ba-d74f7a916db4 5Gi RWO managed-premium 13h
+ ```
-### 2. Delete each of the PVCs
-Delete the data and log PVCs for each of the SQL Managed Instances you deleted.
-The general format of this command is:
-```console
-kubectl delete pvc <name of pvc>
-```
+ 1. Delete the data and log PVCs for each of the SQL Managed Instances you deleted.
+ The general format of this command is:
-For example:
-```console
-kubectl delete pvc data-demo-mi-0 -n arc
-kubectl delete pvc logs-demo-mi-0 -n arc
-```
+ ```console
+ kubectl delete pvc <name of pvc>
+ ```
-Each of these kubectl commands will confirm the successful deleting of the PVC. For example:
-```console
-persistentvolumeclaim "data-demo-mi-0" deleted
-persistentvolumeclaim "logs-demo-mi-0" deleted
-```
-
+ For example:
-> [!NOTE]
-> As indicated, not deleting the PVCs might eventually get your Kubernetes cluster in a situation where it will throw errors. Some of these errors may include being unable to create, read, update, delete resources from the Kubernetes API, or being able to run commands like `az arcdata dc export` as the controller pods may be evicted from the Kubernetes nodes because of this storage issue (normal Kubernetes behavior).
->
-> For example, you may see messages in the logs similar to:
-> - Annotations: microsoft.com/ignore-pod-health: true
-> - Status: Failed
-> - Reason: Evicted
-> - Message: The node was low on resource: ephemeral-storage. Container controller was using 16372Ki, which exceeds its request of 0.
+ ```console
+ kubectl delete pvc data-demo-mi-0 -n arc
+ kubectl delete pvc logs-demo-mi-0 -n arc
+ ```
+ Each of these kubectl commands will confirm the successful deletion of the PVC. For example:
+
+ ```console
+ persistentvolumeclaim "data-demo-mi-0" deleted
+ persistentvolumeclaim "logs-demo-mi-0" deleted
+ ```
+
## Next steps Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md
az sql mi-arc create -n <instanceName> --k8s-namespace <namespace> --use-k8s --t
Example: ```azurecli
-az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s --tier bc --replicas 3
+az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s --tier BusinessCritical --replicas 3
``` Directly connected mode:
az sql mi-arc create --name <name> --resource-group <group> --location <Azure l
``` Example: ```azurecli
-az sql mi-arc create --name sqldemo --resource-group rg --location uswest2 ΓÇôsubscription xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --tier bc --replcias 3
+az sql mi-arc create --name sqldemo --resource-group rg --location westus2 --subscription xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --tier BusinessCritical --replicas 3
``` By default, all the replicas are configured in synchronous mode. This means any updates on the primary instance will be synchronously replicated to each of the secondary instances.
azure-arc Reference Az Arcdata Dc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc.md
Increase logging verbosity. Use `--debug` for full debug logs.
## az arcdata dc export Export metrics, logs or usage to a file. ```azurecli
-az arcdata dc export
+az arcdata dc export -t logs --path logs.json --k8s-namespace namespace --use-k8s
``` ### Global Arguments #### `--debug`
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
You can install the Azure Key Vault Secrets Provider extension on your connected
### Azure portal
-1. In the [Azure portal](https://portal/azure.com), navigate to **Kubernetes - Azure Arc** and select your cluster.
+1. In the [Azure portal](https://portal.azure.com), navigate to **Kubernetes - Azure Arc** and select your cluster.
1. Select **Extensions** (under **Settings**), and then select **+ Add**. [![Screenshot showing the Extensions page for an Arc-enabled Kubernetes cluster in the Azure portal.](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg)](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg#lightbox)
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
The application repository contains a `.pipeline` folder with the pipelines you'
| Pipeline file name | Description | | - | - |
-| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** |
-| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/.pipelines/az-vote-ci-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** |
-| [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/.pipelines/az-vote-cd-pipeline.yaml) | The application CD pipeline, named **arc-cicd-demo-src CD** |
+| [`.pipelines/az-vote-pr-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-pr-pipeline.yaml) | The application PR pipeline, named **arc-cicd-demo-src PR** |
+| [`.pipelines/az-vote-ci-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-ci-pipeline.yaml) | The application CI pipeline, named **arc-cicd-demo-src CI** |
+| [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) | The application CD pipeline, named **arc-cicd-demo-src CD** |
### Connect Azure Container Registry to Azure DevOps During the CI process, you'll deploy your application containers to a registry. Start by creating an Azure service connection:
A successful CI pipeline run triggers the CD pipeline to complete the deployment
* View the Azure Vote app in your browser at `http://localhost:8080/` and verify the voting choices have changed to Tabs vs Spaces. 1. Repeat steps 1-7 for the `stage` environment.
-Your deployment is now complete. This ends the CI/CD workflow. Refer to the [Azure DevOps GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/docs/azdo-gitops.md) in the application repository that explains in details the steps and techniques implemented in the CI/CD pipelines used in this tutorial.
+Your deployment is now complete. This ends the CI/CD workflow. Refer to the [Azure DevOps GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops.md) in the application repository, which explains in detail the steps and techniques implemented in the CI/CD pipelines used in this tutorial.
## Implement CI/CD with GitHub
The CD Stage workflow:
Once the manifests PR to the Stage environment is merged and Flux successfully applied all the changes, it updates Git commit status in the GitOps repository.
-Your deployment is now complete. This ends the CI/CD workflow. Refer to the [GitHub GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/FluxV2/docs/azdo-gitops-githubfluxv2.md) in the application repository that explains in details the steps and techniques implemented in the CI/CD workflows used in this tutorial.
+Your deployment is now complete. This ends the CI/CD workflow. Refer to the [GitHub GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops-githubfluxv2.md) in the application repository, which explains in detail the steps and techniques implemented in the CI/CD workflows used in this tutorial.
## Clean up resources
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Title: "Tutorial: Use GitOps with Flux v2 in Azure Arc-enabled Kubernetes or Azure Kubernetes Service (AKS) clusters" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters."
-keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops"
+keywords: "GitOps, Flux, Flux v2, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops"
Previously updated : 05/24/2022 Last updated : 06/06/2022
Here's an example for including the [Flux image-reflector and image-automation c
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connectedClusters or managedClusters> --name flux --extension-type microsoft.flux --config image-automation-controller.enabled=true image-reflector-controller.enabled=true ```
+### Red Hat OpenShift onboarding guidance
+Flux controllers require a **nonroot** [Security Context Constraint](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.2/html/authentication/managing-pod-security-policies) to properly provision pods on the cluster. These constraints must be added to the cluster prior to onboarding the `microsoft.flux` extension.
+
+```console
+NS="flux-system"
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:kustomize-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:helm-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:source-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:notification-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:image-automation-controller
+oc adm policy add-scc-to-user nonroot system:serviceaccount:$NS:image-reflector-controller
+```
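+
+To spot-check that the constraints were applied, you can describe the SCC and look for the Flux service accounts under its users list (a quick verification step, not part of the official onboarding guidance):
+
+```console
+oc describe scc nonroot
+```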
+
+For more information on OpenShift guidance for onboarding Flux, refer to the [Flux documentation](https://fluxcd.io/docs/use-cases/openshift/#openshift-setup).
+ ## Work with parameters For a description of all parameters that Flux supports, see the [official Flux documentation](https://fluxcd.io/docs/). Flux in Azure doesn't support all parameters yet. Let us know if a parameter you need is missing from the Azure implementation.
azure-arc Quick Enable Hybrid Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md
Title: Quickstart - Connect hybrid machine with Azure Arc-enabled servers description: In this quickstart, you connect and register a hybrid machine with Azure Arc-enabled servers. Previously updated : 05/20/2022 Last updated : 06/06/2022
Use the Azure portal to create a script that automates the agent download and in
1. On the **Servers - Azure Arc** page, select **Add** near the upper left.-->
-1. [Go to the Azure portal page for adding servers with Azure Arc](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/HybridVmAddBlade). Select the **Add a single server** tile, then select **Generate script**.
+1. [Go to the Azure portal page for adding servers with Azure Arc](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/HybridVmAddBlade). Select the **Add a single server** tile, then select **Generate script**.
:::image type="content" source="media/quick-enable-hybrid-vm/add-single-server.png" alt-text="Screenshot of Azure portal's add server page." lightbox="media/quick-enable-hybrid-vm/add-single-server-expanded.png"::: > [!NOTE]
azure-cache-for-redis Cache How To Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-version.md
Previously updated : 10/07/2021 Last updated : 06/03/2022+ # Set Redis version for Azure Cache for Redis
-In this article, you'll learn how to configure the Redis software version to be used with your cache instance. Azure Cache for Redis offers the latest major version of Redis and at least one previous version. It will update these versions regularly as newer Redis software is released. You can choose between the two available versions. Keep in mind that your cache will be upgraded to the next version automatically if the version it's using currently is no longer supported.
+
+In this article, you'll learn how to configure the Redis software version to be used with your cache instance. Azure Cache for Redis offers the latest major version of Redis and at least one previous version. It will update these versions regularly as newer Redis software is released. You can choose between the two available versions. Keep in mind that your cache will be upgraded to the next version automatically if the version it's using currently is no longer supported.
> [!NOTE] > At this time, Redis 6 does not support ACL, and geo-replication between a Redis 4 and 6 cache. > ## Prerequisites+ * Azure subscription - [create one for free](https://azure.microsoft.com/free/) ## Create a cache using the Azure portal+ To create a cache, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**.
To create a cache, follow these steps:
1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**. :::image type="content" source="media/cache-create/new-cache-menu.png" alt-text="Select Azure Cache for Redis.":::
-
+ 1. On the **Basics** page, configure the settings for your new cache.
-
+ | Setting | Suggested value | Description | | | - | -- | | **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
To create a cache, follow these steps:
| **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. | | **Location** | Select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. | | **Cache type** | Select a [cache tier and size](https://azure.microsoft.com/pricing/details/cache/). | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). |
-
+ 1. On the **Advanced** page, choose a Redis version to use.
-
+ :::image type="content" source="media/cache-how-to-version/select-redis-version.png" alt-text="Redis version.":::
-1. Select **Create**.
-
- It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
+1. Select **Create**.
+ It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
## Create a cache using Azure PowerShell ```azurepowershell New-AzRedisCache -ResourceGroupName "ResourceGroupName" -Name "CacheName" -Location "West US 2" -Size 250MB -Sku "Standard" -RedisVersion "6" ```+ For more information on how to manage Azure Cache for Redis with Azure PowerShell, see [here](cache-how-to-manage-redis-cache-powershell.md) ## Create a cache using Azure CLI
az redis create --resource-group resourceGroupName --name cacheName --location w
For more information on how to manage Azure Cache for Redis with Azure CLI, see [here](cli-samples.md) ## Upgrade an existing Redis 4 cache to Redis 6
-Azure Cache for Redis supports upgrading your Redis cache server major version from Redis 4 to Redis 6. Please note that upgrading is permanent and it may cause a brief connection blip. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading. Please see [here](cache-how-to-import-export-data.md) for details on how to export.
+
+Azure Cache for Redis supports upgrading your Redis cache server major version from Redis 4 to Redis 6. Upgrading is permanent, and it might cause a brief connection blip. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading. For details on how to export, see [here](cache-how-to-import-export-data.md).
> [!NOTE] > Please note, upgrading is not supported on a cache with a geo-replication link, so you will have to manually unlink your cache instances before upgrading.
Azure Cache for Redis supports upgrading your Redis cache server major version f
To upgrade your cache, follow these steps:
+### Upgrade using the Azure portal
+ 1. In the Azure portal, search for **Azure Cache for Redis**. Then, press enter or select it from the search suggestions. :::image type="content" source="media/cache-private-link/4-search-for-cache.png" alt-text="Search for Azure Cache for Redis.":::
To upgrade your cache, follow these steps:
1. If your cache instance is eligible to be upgraded, you should see the following blue banner. If you wish to proceed, select the text in the banner. :::image type="content" source="media/cache-how-to-version/blue-banner-upgrade-cache.png" alt-text="Blue banner that says you can upgrade your Redis 6 cache with additional features and commands that enhance developer productivity and ease of use. Upgrading your cache instance cannot be reversed.":::
-
-1. A dialog box will then popup notifying you that upgrading is permanent and may cause a brief connection blip. Select yes if you would like to upgrade your cache instance.
+
+1. A dialog box appears, notifying you that upgrading is permanent and might cause a brief connection blip. Select **Yes** if you would like to upgrade your cache instance.
:::image type="content" source="media/cache-how-to-version/dialog-version-upgrade.png" alt-text="Dialog with more information about upgrading your cache.":::
To upgrade your cache, follow these steps:
:::image type="content" source="media/cache-how-to-version/upgrade-status.png" alt-text="Overview shows status of cache being upgraded.":::
+### Upgrade using Azure CLI
+
+To upgrade a cache from 4 to 6 using the Azure CLI, use the following command:
+
+```azurecli-interactive
+az redis update --name cacheName --resource-group resourceGroupName --set redisVersion=6
+```
+
+### Upgrade using PowerShell
+
+To upgrade a cache from 4 to 6 using PowerShell, use the following command:
+
+```powershell-interactive
+Set-AzRedisCache -Name "CacheName" -ResourceGroupName "ResourceGroupName" -RedisVersion "6"
+```
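+
+With either method, you can confirm the result afterwards by querying the `redisVersion` property of the cache (a quick check using the Azure CLI):
+
+```azurecli-interactive
+# Returns the major Redis version of the cache, for example "6"
+az redis show --name cacheName --resource-group resourceGroupName --query redisVersion --output tsv
+```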
+ ## FAQ ### What features aren't supported with Redis 6?
-At this time, Redis 6 does not support ACL, and geo-replication between a Redis 4 and 6 cache.
+At this time, Redis 6 doesn't support ACL, and geo-replication between a Redis 4 and 6 cache.
### Can I change the version of my cache after it's created?
-You can upgrade your existing Redis 4 caches to Redis 6, please see [here](#upgrade-an-existing-redis-4-cache-to-redis-6) for details. Please note upgrading your cache instance is permanent and you cannot downgrade your Redis 6 caches to Redis 4 caches.
+You can upgrade your existing Redis 4 caches to Redis 6; see [here](#upgrade-an-existing-redis-4-cache-to-redis-6) for details. Upgrading your cache instance is permanent, and you can't downgrade your Redis 6 caches to Redis 4 caches.
## Next Steps - To learn more about Redis 6 features, see [Diving Into Redis 6.0 by Redis](https://redis.com/blog/diving-into-redis-6/)-- To learn more about Azure Cache for Redis features:-
-> [!div class="nextstepaction"]
-> [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+- To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-edge-hardware-center Azure Edge Hardware Center Create Order https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-edge-hardware-center/azure-edge-hardware-center-create-order.md
Previously updated : 01/03/2022 Last updated : 05/04/2022 # Customer intent: As an IT admin, I need to understand how to create an order via the Azure Edge Hardware Center.
Before you begin:
For information on how to register, go to [Register resource provider](../databox-online/azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#register-resource-providers). -- Make sure that all the other prerequisites related to the product that you are ordering are met. For example, if ordering Azure Stack Edge device, ensure that all the [Azure Stack Edge prerequisites](../databox-online/azure-stack-edge-gpu-deploy-prep.md#prerequisites) are completed.
+- Make sure that all the other prerequisites related to the product that you're ordering are met. For example, if ordering Azure Stack Edge device, ensure that all the [Azure Stack Edge prerequisites](../databox-online/azure-stack-edge-gpu-deploy-prep.md#prerequisites) are completed.
## Create an order
azure-edge-hardware-center Azure Edge Hardware Center Manage Order https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-edge-hardware-center/azure-edge-hardware-center-manage-order.md
Previously updated : 01/03/2022 Last updated : 06/01/2022 # Use the Azure portal to manage your Azure Edge Hardware Center orders
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
func new --template "Http Trigger" --name MyHttpTrigger
This example creates a Queue Storage trigger named `MyQueueTrigger`: ```
-func new --template "Queue Trigger" --name MyQueueTrigger
+func new --template "Azure Queue Storage Trigger" --name MyQueueTrigger
``` To learn more, see the [`func new` command](functions-core-tools-reference.md#func-new).
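If you're not sure of the exact template name for your language and Core Tools version, you can list the available templates first (template names vary across versions, so treat the name above as illustrative):

```
func templates list
```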
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
After your application is registered with Azure AD, you can create work items in
Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, Activity Log alerts, and Azure Log Analytics alerts in the Azure portal. To learn more about action groups, see [Create and manage action groups in the Azure portal](../alerts/action-groups.md).
+> [!NOTE]
+> If you are using a log search alert, note that the query should project a "Computer" column with the configuration items list in order to have them included as part of the payload.
+ To add a webhook to an action, follow these instructions for Secure Webhook: 1. In the [Azure portal](https://portal.azure.com/), search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md
Title: Release annotations for Application Insights | Microsoft Docs
description: Learn how to create annotations to track deployment or other significant events with Application Insights. Last updated 07/20/2021
-ms.reviwer: casocha
++ # Release annotations for Application Insights
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
Last updated 11/23/2016 ms.devlang: csharp, javascript, python
-ms.reviwer: cithomas
+ # Filter and preprocess telemetry in the Application Insights SDK
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Last updated 05/16/2022 ms.devlang: csharp, java, javascript, python -+ # Application Map: Triage Distributed Applications
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
ms.devlang: csharp Last updated 10/12/2021+ # Application Insights for ASP.NET Core applications
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Last updated 08/26/2020 ms.devlang: csharp + # Dependency Tracking in Azure Application Insights
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
ms.devlang: csharp Last updated 05/19/2021+ + # Diagnose exceptions in web apps with Application Insights Exceptions in web applications can be reported with [Application Insights](./app-insights-overview.md). You can correlate failed requests with exceptions and other events on both the client and server, so that you can quickly diagnose the causes. In this article, you'll learn how to set up exception reporting, report exceptions explicitly, diagnose failures, and more.
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
ms.devlang: csharp Last updated 05/08/2019+ + # Explore .NET/.NET Core and Python trace logs in Application Insights Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System.Diagnostics.Trace to [Azure Application Insights][start]. For Python applications, send diagnostic tracing logs using AzureLogHandler in OpenCensus Python for Azure Monitor. You can then explore and search them. Those logs are merged with the other log files from your application, so you can identify traces that are associated with each user request and correlate them with other events and exception reports.
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
ms.devlang: csharp Last updated 05/21/2020+ + # Troubleshooting no data - Application Insights for .NET/.NET Core [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Automate Custom Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-custom-reports.md
Title: Automate custom reports with Application Insights data
description: Automate custom daily/weekly/monthly reports with Azure Monitor Application Insights data Last updated 05/20/2019-+
+ms.pmowner: vitalyg
# Automate custom reports with Application Insights data
azure-monitor Availability Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-alerts.md
description: Learn how to set up web tests in Application Insights. Get alerts i
Last updated 06/19/2019 + # Availability alerts
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
Last updated 05/22/2019 ms.devlang: csharp -
+ms.pmowner: casocha
+ # Configuring the Application Insights SDK with ApplicationInsights.config or .xml
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Legacy table: availability
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: browserTimings
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: dependencies
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: customEvents
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: customMetrics
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: pageViews
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: performanceCounters
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |category|string|Category|string|
Legacy table: requests
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
Legacy table: exceptions
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |assembly|string|Assembly|string|
Legacy table: traces
|ApplicationInsights|Type|LogAnalytics|Type| |:|:|:|:|
-|appId|string|\_ResourceGUID|string|
+|appId|string|ResourceGUID|string|
|application_Version|string|AppVersion|string| |appName|string|\_ResourceId|string| |client_Browser|string|ClientBrowser|string|
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
+
+ Title: Application Insights SDK support guidance
+description: Support guidance for Application Insights legacy and preview SDKs
++ Last updated : 03/24/2022+++
+# Application Insights SDK support guidance
+
+Microsoft announces feature deprecations or breaking changes at least three years in advance and strives to provide a seamless process for migration to the replacement experience.
+
+The [Microsoft Azure SDK lifecycle policy](https://docs.microsoft.com/lifecycle/faq/azure) is followed when features are enhanced in a new SDK or before an SDK is designated as legacy. Microsoft strives to retain legacy SDK functionality, but newer features may not be available with older versions.
+
+> [!NOTE]
+> Diagnostic tools often provide better insight into the root cause of a problem when the latest stable SDK version is used.
+
+Support engineers are expected to provide SDK update guidance according to the following table, referencing the current SDK version in use and any alternatives.
+
+|Current SDK version in use |Alternative version available |Update policy for support |
+||||
+|Stable and less than one year old | Newer supported stable version | **UPDATE RECOMMENDED** |
+|Stable and more than one year old | Newer supported stable version | **UPDATE REQUIRED** |
+|Unsupported ([support policy](https://docs.microsoft.com/lifecycle/faq/azure)) | Any supported version | **UPDATE REQUIRED** |
+|Preview | Stable version | **UPDATE REQUIRED** |
+|Preview | Older stable version | **UPDATE RECOMMENDED** |
+|Preview | Newer preview version, no older stable version | **UPDATE RECOMMENDED** |
+
+> [!TIP]
+> Switching to [auto-instrumentation](codeless-overview.md) eliminates the need for manual SDK updates.
+
+> [!WARNING]
+> Only commercially reasonable support is provided for Preview versions of the SDK. If a support incident requires escalation to development for further guidance, customers will be asked to use a fully supported SDK version to continue support. Commercially reasonable support does not include an option to engage Microsoft product development resources; technical workarounds may be limited or not possible.
+
+To see the current version of Application Insights SDKs and previous versions release dates, reference the [release notes](release-notes.md).
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The table below lists the available curated visualizations and more detailed inf
| [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible Application Performance Management (APM) service which monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes. | | [Azure Monitor Log Analytics Workspace](./logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). | | [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
- | [Azure SQL insights (preview)](./insights/sql-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
+ | [Azure SQL insights (preview)](./insights/sql-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
| [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. | | [Azure Network Insights](./insights/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resource. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resource that are hosting your website, by simply searching for your website name. | | [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
azure-netapp-files Azure Policy Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-policy-definitions.md
+
+ Title: Azure Policy definitions for Azure NetApp Files | Microsoft Docs
+description: Describes the Azure Policy custom definitions and built-in definitions that you can use with Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ms.devlang: na
+ Last updated : 06/02/2022++
+# Azure Policy definitions for Azure NetApp Files
+
+[Azure Policy](../governance/policy/overview.md) helps to enforce organizational standards and to assess compliance at-scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources.
+
+Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. Policy definitions for these common use cases are already available in your Azure environment as built-ins to help you get started.
+
+The process of [creating and implementing a policy in Azure Policy](../governance/policy/tutorials/create-and-manage.md) begins with creating a (built-in or custom) [policy definition](../governance/policy/overview.md#policy-definition). Every policy definition has conditions under which it's enforced. It also has a defined [***effect***](../governance/policy/concepts/effects.md) that takes place if the conditions are met. Azure NetApp Files supports both custom and built-in Azure Policy definitions.
+
+## Custom policy definitions
+
+Azure NetApp Files supports Azure Policy. You can integrate Azure NetApp Files with Azure Policy through [creating custom policy definitions](../governance/policy/tutorials/create-custom-policy-definition.md). You can find examples in [Enforce Snapshot Policies with Azure Policy](https://anfcommunity.com/2021/08/30/enforce-snapshot-policies-with-azure-policy/) and [Azure Policy now available for Azure NetApp Files](https://anfcommunity.com/2021/04/19/azure-policy-now-available-for-azure-netapp-files/).
+
+## Built-in policy definitions
+
+The Azure Policy built-in definitions for Azure NetApp Files enable organization admins to restrict creation of unsecure volumes or audit existing volumes. Each policy definition in Azure Policy has a single *effect*. That effect determines what happens when the policy rule is evaluated to match.
+
+The following effects of Azure Policy can be used with Azure NetApp Files:
+
+* *Deny* creation of non-compliant volumes
+* *Audit* existing volumes for compliance
+* *Disable* a policy definition
+
+The following Azure Policy built-in definitions are available for use with Azure NetApp Files:
+
+* *Azure NetApp Files volumes should not use NFSv3 protocol type.*
+ This policy definition disallows the use of the NFSv3 protocol type to prevent unsecure access to volumes. NFSv4.1 or NFSv4.1 with Kerberos protocol should be used to access NFS volumes to ensure data integrity and encryption.
+
+* *Azure NetApp Files volumes of type NFSv4.1 should use Kerberos data encryption.*
+ This policy definition allows only the use of Kerberos privacy (`krb5p`) security mode to ensure that data is encrypted.
+
+* *Azure NetApp Files volumes of type NFSv4.1 should use Kerberos data integrity or data privacy.*
+ This policy definition ensures that either Kerberos integrity (`krb5i`) or Kerberos privacy (`krb5p`) is selected to ensure data integrity and data privacy.
+
+* *Azure NetApp Files SMB volumes should use SMB3 encryption.*
+ This policy definition disallows the creation of SMB volumes without SMB3 encryption to ensure data integrity and data privacy.
+
+To learn how to assign a policy to resources and view compliance report, see [Assign the Policy](../storage/common/transport-layer-security-configure-minimum-version.md#assign-the-policy).
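+
+As a sketch of what assignment can look like from the command line (the display-name filter, assignment name, and scope below are placeholders, not exact built-in definition names):
+
+```azurecli-interactive
+# Find the Azure NetApp Files built-in policy definitions
+az policy definition list --query "[?contains(displayName, 'Azure NetApp Files')].{name:name, displayName:displayName}" --output table
+
+# Assign one of them at resource-group scope
+az policy assignment create \
+    --name <assignment-name> \
+    --policy <policy-definition-name> \
+    --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
+```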
+
+## Next steps
+
+* [Azure Policy documentation](/azure/governance/policy/)
azure-netapp-files Faq Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-integration.md
Previously updated : 10/11/2021 Last updated : 06/02/2022 # Integration FAQs for Azure NetApp Files
You can mount Azure NetApp Files NFS volumes on AVS Windows VMs or Linux VMs. Yo
Using Azure NetApp Files NFS or SMB volumes with AVS for *Guest OS mounts* is supported in [all AVS and ANF enabled regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware,netapp).
-## Does Azure NetApp Files work with Azure Policy?
-
-Yes. Azure NetApp Files is a first-party service. It fully adheres to Azure Resource Provider standards. As such, Azure NetApp Files can be integrated into Azure Policy via *custom policy definitions*. For information about how to implement custom policies for Azure NetApp Files, see
-[Azure Policy now available for Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure/azure-policy-now-available-for-azure-netapp-files/m-p/2282258) on Microsoft Tech Community.
- ## Which Unicode Character Encoding is supported by Azure NetApp Files for the creation and display of file and directory names? Azure NetApp Files only supports file and directory names that are encoded with the UTF-8 Unicode Character Encoding format for both NFS and SMB volumes.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 05/25/2022 Last updated : 06/02/2022
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## June 2022
+
+* [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions)
+
+ Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. Azure NetApp Files already supports Azure Policy via custom policy definitions. Azure NetApp Files now also provides built-in policy definitions to enable organization admins to restrict creation of unsecure NFS volumes or audit existing volumes more easily.
+ ## May 2022 * [LDAP signing](create-active-directory-connections.md#ldap-signing) now generally available (GA)
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
Last updated 03/25/2022 + # Connection string in Azure SignalR Service Connection string is an important concept that contains information about how to connect to SignalR service. In this article, you'll learn the basics of connection string and how to configure it in your application.
Connection string is an important concept that contains information about how to
When an application needs to connect to Azure SignalR Service, it will need the following information:
-* The HTTP endpoint of the SignalR service instance
-* How to authenticate with the service endpoint
+- The HTTP endpoint of the SignalR service instance
+- How to authenticate with the service endpoint
+
+The connection string contains this information.
+
+## What a connection string looks like
+
+A connection string consists of a series of key/value pairs separated by semicolons (;), with an equal sign (=) connecting each key and its value. Keys aren't case-sensitive.
-Connection string contains such information. To see how a connection string looks like, you can open a SignalR service resource in Azure portal and go to "Keys" tab. You'll see two connection strings (primary and secondary) in the following format:
+For example, a typical connection string may look like this:
``` Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0; ```
-> [!NOTE]
-> Besides portal, you can also use Azure CLI to get the connection string:
->
-> ```bash
-> az signalr key list -g <resource_group> -n <resource_name>
-> ```
You can see that the connection string contains two main pieces of information:
-* `Endpoint=https://<resource_name>.service.signalr.net` is the endpoint URL of the resource
-* `AccessKey=<access_key>` is the key to authenticate with the service. When access key is specified in connection string, SignalR service SDK will use it to generate a token that can be validated by the service.
+- `Endpoint=https://<resource_name>.service.signalr.net` is the endpoint URL of the resource
+- `AccessKey=<access_key>` is the key to authenticate with the service. When an access key is specified in the connection string, the SignalR service SDK will use it to generate a token that can be validated by the service.
->[!NOTE]
-> For more information about how access tokens are generated and validated, see this [article](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md#authenticate-via-azure-signalr-service-accesskey).
+The following table lists all the valid names for key/value pairs in the connection string.
+
+| key | Description | Required | Default value | Example value |
+| -- | -- | -- | -- | |
+| Endpoint | The URI of your Azure SignalR Service instance. | Y | N/A | https://foo.service.signalr.net |
+| Port | The port that your Azure SignalR Service instance is listening on. | N | 80/443, depends on endpoint URI schema | 8080 |
+| Version | The version of given connection string. | N | 1.0 | 1.0 |
+| ClientEndpoint | The URI of your reverse proxy, like App Gateway or API Management | N | null | https://foo.bar |
+| AuthType | The auth type. By default, AccessKey is used to authorize requests. **Case insensitive** | N | null | azure, azure.msi, azure.app |
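+
+For example, a connection string that also sets the optional `Port` and `ClientEndpoint` keys (the values below are illustrative placeholders) looks like this:
+
+```
+Endpoint=https://foo.service.signalr.net;Port=8080;ClientEndpoint=https://foo.bar;AccessKey=<access_key>;Version=1.0;
+```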
+
+### Use AccessKey
+
+Local auth method will be used when `AuthType` is set to null.
+
+| key | Description | Required | Default value | Example value |
+| | - | -- | - | - |
+| AccessKey | The key string in base64 format for building access token usage. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
+
+### Use Azure Active Directory
+
+Azure AD auth method will be used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`.
+
+| key | Description | Required | Default value | Example value |
+| -- | | -- | - | |
+| ClientId | A guid represents an Azure application or an Azure identity. | N | null | `00000000-0000-0000-0000-000000000000` |
+| TenantId | A guid represents an organization in Azure Active Directory. | N | null | `00000000-0000-0000-0000-000000000000` |
+| ClientSecret | The password of an Azure application instance. | N | null | `***********************.****************` |
+| ClientCertPath | The absolute path of a cert file to an Azure application instance. | N | null | `/usr/local/cert/app.cert` |
+
+A different `TokenCredential` will be used to generate Azure AD tokens depending on the parameters you've given.
+
+- `type=azure`
+
+ [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) will be used.
+
+ ```
+ Endpoint=xxx;AuthType=azure
+ ```
+
+- `type=azure.msi`
+
+ 1. A user-assigned managed identity will be used if `clientId` has been given in the connection string.
+
+ ```
+ Endpoint=xxx;AuthType=azure.msi;ClientId=00000000-0000-0000-0000-000000000000
+ ```
+
+ - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) will be used.
+
+ 2. Otherwise, the system-assigned managed identity will be used.
+
+ ```
+ Endpoint=xxx;AuthType=azure.msi;
+ ```
+
+ - [ManagedIdentityCredential()](/dotnet/api/azure.identity.managedidentitycredential) will be used.
+
+
+- `type=azure.app`
+
+ `clientId` and `tenantId` are required to use an [Azure AD application with a service principal](/azure/active-directory/develop/howto-create-service-principal-portal).
+
+ 1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) will be used if `clientSecret` is given.
+ ```
+ Endpoint=xxx;AuthType=azure.app;ClientId=00000000-0000-0000-0000-000000000000;TenantId=00000000-0000-0000-0000-000000000000;ClientSecret=******
+ ```
+
+ 2. [ClientCertificateCredential(clientId, tenantId, clientCertPath)](/dotnet/api/azure.identity.clientcertificatecredential) will be used if `clientCertPath` is given.
+ ```
+ Endpoint=xxx;AuthType=azure.app;ClientId=00000000-0000-0000-0000-000000000000;TenantId=00000000-0000-0000-0000-000000000000;ClientCertPath=/path/to/cert
+ ```
+
+## How to get my connection strings
-## Other authentication types
+### From Azure portal
-Besides access key, SignalR service also supports other types of authentication methods in connection string.
+Open your SignalR service resource in the Azure portal and go to the **Keys** tab.
-### Azure Active Directory Application
+You'll see two connection strings (primary and secondary) in the following format:
+
+> Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
+
+### From Azure CLI
+
+You can also use Azure CLI to get the connection string:
+
+```bash
+az signalr key list -g <resource_group> -n <resource_name>
+```
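
If you only want the primary connection string for scripting, you can add a JMESPath query. This is a sketch; the output property name `primaryConnectionString` is an assumption to verify against your CLI version:

```bash
az signalr key list -g <resource_group> -n <resource_name> --query primaryConnectionString -o tsv
```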
+
+### For using an Azure AD application
You can use [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
-To use Azure AD authentication, you need to remove `AccessKey` from connection string and add `AuthType=aad`. You also need to specify the credentials of your Azure AD application, including client ID, client secret and tenant ID. The connection string will look as follows:
+To use Azure AD authentication, you need to remove `AccessKey` from the connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Azure AD application, including client ID, client secret, and tenant ID. The connection string looks as follows:
```
-Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0;
+Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.app;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0;
``` For more information about how to authenticate using Azure AD application, see this [article](signalr-howto-authorize-application.md).
-### Managed identity
+### For using a managed identity
You can also use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
+There are two types of managed identities. To use a system-assigned identity, you just need to add `AuthType=azure.msi` to the connection string:
+There are two types of managed identities, to use system assigned identity, you just need to add `AuthType=azure.msi` to the connection string:
```
-Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;Version=1.0;
+Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;Version=1.0;
``` SignalR service SDK will automatically use the identity of your app server.
SignalR service SDK will automatically use the identity of your app server.
To use a user-assigned identity, you also need to specify the client ID of the managed identity: ```
-Endpoint=https://<resource_name>.service.signalr.net;AuthType=aad;ClientId=<client_id>;Version=1.0;
+Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;ClientId=<client_id>;Version=1.0;
``` For more information about how to configure managed identity, see this [article](signalr-howto-authorize-managed-identity.md).
For more information about how to configure managed identity, see this [article]
> [!NOTE] > It's highly recommended to use Azure AD to authenticate with SignalR service because it's more secure than using an access key. If you don't use access key authentication at all, consider disabling it completely (go to Azure portal -> Keys -> Access Key -> Disable). If you still use access keys, it's highly recommended to rotate them regularly (more information can be found [here](signalr-howto-key-rotation.md)).
+### Use connection string generator
+
+It may be cumbersome and error-prone to build connection strings manually.
+
+To avoid making mistakes, we built a tool to help you generate connection strings with Azure AD identities like `clientId`, `tenantId`, etc.
+
+To use the connection string generator, open your SignalR resource in the Azure portal and go to the **Connection strings** tab:
++
+On this page, you can choose different authentication types (access key, managed identity, or Azure AD application) and enter information like client endpoint, client ID, and client secret. The connection string is then generated automatically. You can copy it and use it in your application.
+
+> [!NOTE]
+> Nothing you enter on this page is saved after you leave it (the values are client-side only), so copy and save the generated connection string in a secure place for your application to use.
+
+> [!NOTE]
+> For more information about how access tokens are generated and validated, see this [article](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md#authenticate-via-azure-signalr-service-accesskey).
+ ## Client and server endpoints

The connection string contains the HTTP endpoint for the app server to connect to SignalR service. This is also the endpoint the server returns to clients in the negotiate response, so clients can connect to the service too.
-But in some applications there may be an additional component in front of SignalR service and all client connections need to go through that component first (to gain additional benefits like network security, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides such functionality).
+But in some applications there may be an extra component in front of SignalR service, and all client connections need to go through that component first (to gain extra benefits like network security; [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides such functionality).
In such cases, the client needs to connect to an endpoint different from the SignalR service. Instead of manually replacing the endpoint on the client side, you can add `ClientEndpoint` to the connection string:
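A sketch of such a connection string, using the `ClientEndpoint` key from the table earlier (the reverse proxy URL is a placeholder):

```
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ClientEndpoint=https://<url_to_app_gateway>;Version=1.0;
```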
Similarly, when server wants to make [server connections](signalr-concept-intern
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ServerEndpoint=https://<url_to_app_gateway>;Version=1.0; ```
-## Use connection string generator
-
-It may be cumbersome and error-prone to compose connection string manually. In Azure portal, there is a tool to help you generate connection string with additional information like client endpoint and auth type.
-
-To use connection string generator, open the SignalR resource in Azure portal, go to "Connection strings" tab:
--
-In this page you can choose different authentication types (access key, managed identity or Azure AD application) and input information like client endpoint, client ID, client secret, etc. Then connection string will be automatically generated. You can copy and use it in your application.
-
-> [!NOTE]
-> Everything you input in this page won't be saved after you leave the page (since they're only client side information), so please copy and save it in a secure place for your application to use.
- ## Configure connection string in your application There are two ways to configure connection string in your application.
services.AddSignalR().AddAzureSignalR("<connection_string>");
Or you can call `AddAzureSignalR()` without any arguments, then service SDK will read the connection string from a config named `Azure:SignalR:ConnectionString` in your [config providers](/dotnet/core/extensions/configuration-providers).
-In a local development environment, the config is usually stored in file (appsettings.json or secrets.json) or environment variables, so you can use one of the following ways to configure connection string:
+In a local development environment, the config is stored in a file (appsettings.json or secrets.json) or in environment variables, so you can use one of the following ways to configure the connection string:
-* Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)
-* Set connection string to environment variable named `Azure__SignalR__ConnectionString` (colon needs to replaced with double underscore in [environment variable config provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider)).
+- Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)
+- Set the connection string in an environment variable named `Azure__SignalR__ConnectionString` (colons need to be replaced with double underscores in the [environment variable config provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider)), as shown in the sketch after this list.
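
For example, on Linux or macOS you might export the variable like this (a sketch with a placeholder value):

```bash
export Azure__SignalR__ConnectionString="Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;"
```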
In production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up config provider for those services.
In production environment, you can use other Azure services to manage config/sec
### Configure multiple connection strings
-Azure SignalR Service also allows server to connect to multiple service endpoints at the same time, so it can handle more connections which are beyond one service instance's limit. Also if one service instance is down, other service instances can be used as backup. For more information about how to use multiple instances, see this [article](signalr-howto-scale-multi-instances.md).
+Azure SignalR Service also allows the server to connect to multiple service endpoints at the same time, so it can handle more connections than one service instance's limit allows. Also, if one service instance is down, the other service instances can be used as backup. For more information about how to use multiple instances, see this [article](signalr-howto-scale-multi-instances.md).
There are also two ways to configure multiple instances:
-* Through code
+- Through code
- ```cs
- services.AddSignalR().AddAzureSignalR(options =>
- {
- options.Endpoints = new ServiceEndpoint[]
- {
- new ServiceEndpoint("<connection_string_1>", name: "name_a"),
- new ServiceEndpoint("<connection_string_2>", name: "name_b", type: EndpointType.Primary),
- new ServiceEndpoint("<connection_string_3>", name: "name_c", type: EndpointType.Secondary),
- };
- });
- ```
+ ```cs
+ services.AddSignalR().AddAzureSignalR(options =>
+ {
+ options.Endpoints = new ServiceEndpoint[]
+ {
+ new ServiceEndpoint("<connection_string_1>", name: "name_a"),
+ new ServiceEndpoint("<connection_string_2>", name: "name_b", type: EndpointType.Primary),
+ new ServiceEndpoint("<connection_string_3>", name: "name_c", type: EndpointType.Secondary),
+ };
+ });
+ ```
- You can assign a name and type to each service endpoint so you can distinguish them later.
+ You can assign a name and type to each service endpoint so you can distinguish them later.
-* Through config
+- Through config
- You can use any supported config provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example:
+ You can use any supported config provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example:
- ```bash
- dotnet user-secrets set Azure:SignalR:ConnectionString:name_a <connection_string_1>
- dotnet user-secrets set Azure:SignalR:ConnectionString:name_b:primary <connection_string_2>
- dotnet user-secrets set Azure:SignalR:ConnectionString:name_c:secondary <connection_string_3>
- ```
+ ```bash
+ dotnet user-secrets set Azure:SignalR:ConnectionString:name_a <connection_string_1>
+ dotnet user-secrets set Azure:SignalR:ConnectionString:name_b:primary <connection_string_2>
+ dotnet user-secrets set Azure:SignalR:ConnectionString:name_c:secondary <connection_string_3>
+ ```
- You can also assign name and type to each endpoint, by using a different config name in the following format:
+ You can also assign name and type to each endpoint, by using a different config name in the following format:
- ```
- Azure:SignalR:ConnectionString:<name>:<type>
- ```
+ ```
+ Azure:SignalR:ConnectionString:<name>:<type>
+ ```
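
  As a minimal sketch, assuming the named connection strings above are already present in your configuration, the server can then pick them all up without listing endpoints in code:

  ```cs
  // Reads the Azure:SignalR:ConnectionString[:<name>:<type>] entries from config.
  services.AddSignalR().AddAzureSignalR();
  ```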
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
+
+ Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)
+description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts.
+ Last updated : 05/10/2022+++
+# Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)
+
+[Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-introduction) is an enterprise-class, high-performance, metered file storage service. The service supports the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. For more information on Azure NetApp Files, see [Azure NetApp Files](https://docs.microsoft.com/azure/azure-netapp-files/) documentation.
+
+[Azure VMware Solution](/azure/azure-vmware/introduction) supports attaching Network File System (NFS) datastores as a persistent storage option. You can create NFS datastores with Azure NetApp Files volumes and attach them to clusters of your choice. You can also create virtual machines (VMs) for optimal cost and performance.
+
+> [!IMPORTANT]
+> Azure NetApp Files datastores for Azure VMware Solution hosts is currently in public preview. This version is provided without a service-level agreement and is not recommended for production workloads. Some features may not be supported or may have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+By using NFS datastores backed by Azure NetApp Files, you can expand your storage instead of scaling the clusters. You can also use Azure NetApp Files volumes to replicate data from on-premises or primary VMware environments for the secondary site.
+
+Create your Azure VMware Solution and create Azure NetApp Files NFS volumes in the virtual network connected to it using an ExpressRoute. Ensure there's connectivity from the private cloud to the NFS volumes created. Use those volumes to create NFS datastores and attach the datastores to clusters of your choice in a private cloud. As a native integration, no other permissions configured via vSphere are needed.
+
+The following diagram demonstrates a typical architecture of Azure NetApp Files backed NFS datastores attached to an Azure VMware Solution private cloud via ExpressRoute.
++
+## Prerequisites
+
+Before you begin the prerequisites, review the [Performance best practices](#performance-best-practices) section to learn about optimal performance of NFS datastores on Azure NetApp Files volumes.
+
+1. [Deploy Azure VMware Solution](/azure/azure-vmware/deploy-azure-vmware-solution) private cloud in a configured virtual network. For more information, see [Network planning checklist](/azure/azure-vmware/tutorial-network-checklist) and [Configure networking for your VMware private cloud](/azure/azure-vmware/tutorial-configure-networking).
+1. Create an [NFSv3 volume for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-create-volumes) in the same virtual network as the Azure VMware Solution private cloud.
+ 1. Verify connectivity from the private cloud to Azure NetApp Files volume by pinging the attached target IP.
+ 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
+
+ `az feature register --name "ANFAvsDataStore" --namespace "Microsoft.NetApp"`
+
+ `az feature show --name "ANFAvsDataStore" --namespace "Microsoft.NetApp" --query properties.state`
+ 1. Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For optimal performance, it's recommended to use the Ultra tier. Select the **Azure VMware Solution Datastore** option listed under the **Protocol** section.
+ 1. Create a volume with **Standard** [network features](/azure/azure-netapp-files/configure-network-features) if available for ExpressRoute FastPath connectivity.
+ 1. Under the **Protocol** section, select **Azure VMware Solution Datastore** to indicate the volume is created to use as a datastore for Azure VMware Solution private cloud.
+ 1. If you're using [export policies](/azure/azure-netapp-files/azure-netapp-files-configure-export-policy) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud can be replaced, so if the IP range isn't enabled, connectivity to the datastore will be impacted.
+
+## Supported regions
+
+Azure VMware Solution currently supports the following regions: East US, Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, France Central, Germany West Central, Japan West, North Central US, North Europe, Southeast Asia, Switzerland West, UK South, UK West, South Central US, and West US. The list of supported regions will expand as the preview progresses.
+
+## Performance best practices
+
+There are some important best practices to follow for optimal performance of NFS datastores on Azure NetApp Files volumes.
+
+- Create Azure NetApp Files volumes using **Standard** network features to enable optimized connectivity from Azure VMware Solution private cloud via ExpressRoute FastPath connectivity.
+- For optimized performance, choose **UltraPerformance** gateway and enable [ExpressRoute FastPath](/azure/expressroute/expressroute-howto-linkvnet-arm#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](/azure/expressroute/expressroute-about-virtual-network-gateways).
+- Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. For best performance, it's recommended to use the Ultra tier.
+- Create multiple datastores of 4-TB size for better performance. The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [Availability Zone](https://docs.microsoft.com/azure/availability-zones/az-overview#availability-zones).
+
+## Attach an Azure NetApp Files volume to your private cloud
+
+### [Portal](#tab/azure-portal)
+
+To attach an Azure NetApp Files volume to your private cloud using the Azure portal, follow these steps:
+
+1. Sign in to the Azure portal.
+1. Select **Subscriptions** to see a list of subscriptions.
+1. From the list, select the subscription you want to use.
+1. Under Settings, select **Resource providers**.
+1. Search for **Microsoft.AVS** and select it.
+1. Select **Register**.
+1. Under **Settings**, select **Preview features**.
+ 1. Verify you're registered for both the `CloudSanExperience` and `AnfDatastoreExperience` features.
+1. Navigate to your Azure VMware Solution.
+Under **Manage**, select **Storage (preview)**.
+1. Select **Connect Azure NetApp Files volume**.
+1. In **Connect Azure NetApp Files volume**, select the **Subscription**, **NetApp account**, **Capacity pool**, and **Volume** to be attached as a datastore.
+
+ :::image type="content" source="media/attach-netapp-files-to-cloud/connect-netapp-files-portal-experience-1.png" alt-text="Image shows the navigation to Connect Azure NetApp Files volume pop-up window." lightbox="media/attach-netapp-files-to-cloud/connect-netapp-files-portal-experience-1.png":::
+
+1. Verify the protocol is NFS. You'll need to verify the virtual network and subnet to ensure connectivity to the Azure VMware Solution private cloud.
+1. Under **Associated cluster**, select the **Client cluster** to associate the NFS volume as a datastore.
+1. Under **Data store**, create a personalized name for your **Datastore name**.
+ 1. When the datastore is created, you should see all of your datastores in the **Storage (preview)**.
+ 2. You'll also notice that the NFS datastores are added in vCenter.
++
+### [Azure CLI](#tab/azure-cli)
+
+To attach an Azure NetApp Files volume to your private cloud using Azure CLI, follow these steps:
+
+1. Verify the subscription is registered to `CloudSanExperience` feature in the **Microsoft.AVS** namespace. If it's not already registered, then register it.
+
+ `az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS"`
+
+ `az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"`
+1. The registration should take approximately 15 minutes to complete. You can also check the status.
+
+ `az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS" --query properties.state`
+1. If the registration is stuck in an intermediate state for longer than 15 minutes, unregister, then re-register the flag.
+
+ `az feature unregister --name "CloudSanExperience" --namespace "Microsoft.AVS"`
+
+ `az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"`
+1. Verify the subscription is registered to `AnfDatastoreExperience` feature in the **Microsoft.AVS** namespace. If it's not already registered, then register it.
+
+ `az feature register --name "AnfDatastoreExperience" --namespace "Microsoft.AVS"`
+
+ `az feature show --name "AnfDatastoreExperience" --namespace "Microsoft.AVS" --query properties.state`
+1. Verify the VMware extension is installed. If the extension is already installed, verify you're using the latest version of the Azure CLI extension. If an older version is installed, update the extension.
+
+ `az extension show --name vmware`
+
+ `az extension list-versions -n vmware`
+
+ `az extension update --name vmware`
+1. If the VMware extension isn't already installed, install it.
+
+ `az extension add --name vmware`
+1. Create a datastore using an existing ANF volume in Azure VMware Solution private cloud cluster.
+
+ `az vmware datastore netapp-volume create --name MyDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud --volume-id /subscriptions/<Subscription Id>/resourceGroups/<Resourcegroup name>/providers/Microsoft.NetApp/netAppAccounts/<Account name>/capacityPools/<pool name>/volumes/<Volume name>`
+1. If needed, you can display the help on the datastores.
+
+ `az vmware datastore -h`
+1. Show the details of an ANF-based datastore in a private cloud cluster.
+
+ `az vmware datastore show --name ANFDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud`
+1. List all of the datastores in a private cloud cluster.
+
+ `az vmware datastore list --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud`
+++
+## Disconnect an Azure NetApp Files-based datastore from your private cloud
+
+You can use the instructions provided to disconnect an Azure NetApp Files-based (ANF) datastore using either the Azure portal or the Azure CLI. There's no maintenance window required for this operation. The disconnect action only disconnects the ANF volume as a datastore; it doesn't delete the data or the ANF volume.
+
+**Disconnect an ANF datastore using the Azure portal**
+
+1. Select the datastore you want to disconnect from.
+1. Right-click the datastore and select **Disconnect**.
+
+**Disconnect an ANF datastore using Azure CLI**
+
+ `az vmware datastore delete --name ANFDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud`
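
To confirm the datastore was disconnected, you can list the remaining datastores on the cluster, reusing the list command shown earlier:

 `az vmware datastore list --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud`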
+
+## Next steps
+
+Now that you've attached a datastore backed by an Azure NetApp Files NFS volume to your Azure VMware Solution hosts, you can create your VMs. Use the following resources to learn more.
+
+- [Service levels for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-service-levels)
+- Datastore protection using [Azure NetApp Files snapshots](/azure/azure-netapp-files/snapshots-introduction)
+- [About ExpressRoute virtual network gateways](https://docs.microsoft.com/azure/expressroute/expressroute-about-virtual-network-gateways)
+- [Understand Azure NetApp Files backup](/azure/azure-netapp-files/backup-introduction)
+- [Guidelines for Azure NetApp Files network planning](https://docs.microsoft.com/azure/azure-netapp-files/azure-netapp-files-network-topologies)
+
+## FAQs
+
+- **Are there any special permissions required to create the datastore with the Azure NetApp Files volume and attach it onto the clusters in a private cloud?**
+
+ No other special permissions are needed. Datastore creation and attachment are implemented via the Azure VMware Solution control plane.
+
+- **Which NFS versions are supported?**
+
+ NFSv3 is supported for datastores on Azure NetApp Files.
+
+- **Should Azure NetApp Files be in the same subscription as the private cloud?**
+
+ It's recommended to create the Azure NetApp Files volumes for the datastores in the same VNet that has connectivity to the private cloud.
+
+- **How many datastores are we supporting with Azure VMware Solution?**
+
+ The default limit is 8 but it can be increased up to a maximum of 256 by submitting a support ticket. To submit a support ticket, go to [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+
+- **What latencies and bandwidth can be expected from the datastores backed by Azure NetApp Files?**
+
+ We're currently validating and working on benchmarking. For now, follow the [Performance best practices](#performance-best-practices) outlined in this article.
+
+- **What are my options for backup and recovery?**
+
+ Azure NetApp Files (ANF) supports [snapshots](/azure/azure-netapp-files/azure-netapp-files-manage-snapshots) of datastores for quick checkpoints for near-term recovery or quick clones. ANF backup lets you offload your ANF snapshots to Azure storage. This feature is available in public preview. The technology copies and stores only the blocks that changed relative to previously offloaded snapshots, in an efficient format. This ability decreases the Recovery Point Objective (RPO) and Recovery Time Objective (RTO) while lowering the backup data transfer burden on the Azure VMware Solution service.
+
+- **How do I monitor Storage Usage?**
+
+ Use [Metrics for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-metrics) to monitor storage and performance usage for the Datastore volume and to set alerts.
+
+- **What metrics are available for monitoring?**
+
+ Usage and performance metrics are available for monitoring the Datastore volume. Replication metrics are also available for ANF datastore that can be replicated to another region using Cross Regional Replication. For more information about metrics, see [Metrics for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-metrics).
+
+- **What happens if a new node is added to the cluster, or an existing node is removed from the cluster?**
+
+ When you add a new node to the cluster, it will automatically gain access to the datastore. Removing an existing node from the cluster won't affect the datastore.
+
+- **How are the datastores charged? Is there an additional charge?**
+
+ Azure NetApp Files NFS volumes that are used as datastores will be billed following the [capacity pool based billing model](/azure/azure-netapp-files/azure-netapp-files-cost-model). Billing will depend on the service level. There's no extra charge for using Azure NetApp Files NFS volumes as datastores.
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
Title: Concepts - Identity and access description: Learn about the identity and access concepts of Azure VMware Solution Previously updated : 07/29/2021 Last updated : 06/06/2022 # Azure VMware Solution identity concepts
-Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You'll use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. The CloudAdmin role is used for vCenter Server and restricted administrator rights for NSX-T Manager.
+Azure VMware Solution private clouds are provisioned with a vCenter Server and NSX-T Manager. You'll use vCenter to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. The CloudAdmin role is used for vCenter Server and the administrator role (with restricted permissions) is used for NSX-T Manager.
## vCenter Server access and identity [!INCLUDE [vcenter-access-identity-description](includes/vcenter-access-identity-description.md)] > [!IMPORTANT]
-> Azure VMware Solution offers custom roles on vCenter Server but currently doesn't offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter Server](#create-custom-roles-on-vcenter-server) section later in this article.
+> Azure VMware Solution offers custom roles on vCenter Server but currently doesn't offer them on the Azure VMware Solution portal. For more information, see the [Create custom roles on vCenter Server](#create-custom-roles-on-vcenter-server) section later in this article.
### View the vCenter privileges You can view the privileges granted to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter.
-1. Sign in to the vSphere Client and go to **Menu** > **Administration**.
-
+1. Sign in to the vSphere Client and go to **Menu** > **Administration**.
1. Under **Access Control**, select **Roles**.-
-1. From the list of roles, select **CloudAdmin** and then select **Privileges**.
+1. From the list of roles, select **CloudAdmin** and then select **Privileges**.
:::image type="content" source="media/concepts/role-based-access-control-cloudadmin-privileges.png" alt-text="Screenshot showing the roles and privileges for CloudAdmin in the vSphere Client.":::
The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
### Create custom roles on vCenter Server
-Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role.
+Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role.
-You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges lesser than or equal to their current role. You can create roles with privileges greater than CloudAdmin, but you can't assign the role to any users or groups or delete the role.
+You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges lesser than or equal to their current role. You can create roles with privileges greater than CloudAdmin, but you can't assign such a role to any users or groups, or delete it.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmin role as the basis for creating new custom roles.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
1. Select the **CloudAdmin** role and select the **Clone role action** icon.
- >[!NOTE]
+ >[!NOTE]
>Don't clone the **Administrator** role because you can't use it. Also, the custom role created can't be deleted by cloudadmin\@vsphere.local. 1. Provide the name you want for the cloned role. 1. Add or remove privileges for the role and select **OK**. The cloned role is visible in the **Roles** list. - #### Apply a custom role 1. Navigate to the object that requires the added permission. For example, to apply permission to a folder, navigate to **Menu** > **VMs and Templates** > **Folder Name**.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
## NSX-T Manager access and identity
->[!NOTE]
->NSX-T [!INCLUDE [nsxt-version](includes/nsxt-version.md)] is currently supported for all new private clouds.
+When a private cloud is provisioned using the Azure portal, Software Defined Data Center (SDDC) management components like vCenter and NSX-T Manager are provisioned for customers.
+
+Microsoft is responsible for the lifecycle management of NSX-T appliances like NSX-T Managers and NSX-T Edges. Microsoft is also responsible for bootstrapping the network configuration, like creating the Tier-0 gateway.
+
+You're responsible for NSX-T software-defined networking (SDN) configuration, for example:
+
+- Network segments
+- Other Tier-1 gateways
+- Distributed firewall rules
+- Stateful services like gateway firewall
+- Load balancer on Tier-1 gateways
+
+You can access NSX-T Manager using the built-in local user "admin", which is assigned the **Enterprise admin** role and gives a user full privileges to manage NSX-T. While Microsoft manages the lifecycle of NSX-T, certain operations aren't allowed for a user. Operations not allowed include editing the configuration of host and edge transport nodes or starting an upgrade. Azure VMware Solution deploys new users with the specific set of permissions needed by that user. The purpose is to provide a clear separation of control between the Azure VMware Solution control plane configuration and the Azure VMware Solution private cloud user.
+
+For new private cloud deployments (in US West and Australia East) starting **June 2022**, NSX-T access will be provided with a built-in local user `cloudadmin` with a specific set of permissions to use only NSX-T functionality for workloads. The new **cloudadmin** user role will be rolled out in other regions in phases.
+
+> [!NOTE]
+> Admin access to NSX-T will not be provided to users for private cloud deployments created after **June 2022**.
+
+### NSX-T cloud admin user permissions
+
+The following permissions are assigned to the **cloudadmin** user in Azure VMware Solution NSX-T.
+
+| Category | Type | Operation | Permission |
+|--|--|-||
+| Networking | Connectivity | Tier-0 Gateways<br>Tier-1 Gateways<br>Segments | Read-only<br>Full Access<br>Full Access |
+| Networking | Network Services | VPN<br>NAT<br>Load Balancing<br>Forwarding Policy<br>Statistics | Full Access<br>Full Access<br>Full Access<br>Read-only<br>Full Access |
+| Networking | IP Management | DNS<br>DHCP<br>IP Address Pools | Full Access<br>Full Access<br>Full Access |
+| Networking | Profiles | | Full Access |
+| Security | East West Security | Distributed Firewall<br>Distributed IDS and IPS<br>Identity Firewall | Full Access<br>Full Access<br>Full Access |
+| Security | North South Security | Gateway Firewall<br>URL Analysis | Full Access<br>Full Access |
+| Security | Network Introspection | | Read-only |
+| Security | Endpoint Protection | | Read-only |
+| Security | Settings | | Full Access |
+| Inventory | | | Full Access |
+| Troubleshooting | IPFIX | | Full Access |
+| Troubleshooting | Port Mirroring | | Full Access |
+| Troubleshooting | Traceflow | | Full Access |
+| System | Configuration<br>Settings<br>Settings<br>Settings | Identity firewall<br>Users and Roles<br>Certificate Management<br>User Interface Settings | Full Access<br>Full Access<br>Full Access<br>Full Access |
+| System | All other | | Read-only |
++
+You can view the permissions granted to the Azure VMware Solution CloudAdmin role using the following steps:
+
+1. Sign in to NSX-T Manager.
+1. Navigate to **Systems** > **Users and Roles** and locate **User Role Assignment**.
+1. The **Roles** column for the CloudAdmin user provides information on the NSX role-based access control (RBAC) roles assigned.
+1. Select the **Roles** tab to view specific permissions associated with each of the NSX RBAC roles.
+1. To view **Permissions**, expand the **CloudAdmin** role and select a category like Networking or Security.
-Use the *admin* account to access NSX-T Manager. It has full privileges and lets you create and manage Tier-1 (T1) gateways, segments (logical switches), and all services. In addition, the privileges give you access to the NSX-T Tier-0 (T0) gateway. A change to the T0 gateway could result in degraded network performance or no private cloud access. Open a support request in the Azure portal to request any changes to your NSX-T T0 gateway.
+> [!NOTE]
+> The current Azure VMware Solution with the **NSX-T admin user** will eventually switch from the **admin** user to the **cloudadmin** user. You'll receive a notification through Azure Service Health that includes the timeline of this change, so you can update the NSX-T credentials you've used for other integrations.
-
## Next steps Now that you've covered Azure VMware Solution access and identity concepts, you may want to learn about:
azure-web-pubsub Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-metrics.md
Metrics provide the running info of the service. The available metrics are:
|Connection Quota Utilization|Percent|Max / Avg|The percentage of connection connected relative to connection quota.|No Dimensions| |Inbound Traffic|Bytes|Sum|The inbound traffic of service|No Dimensions| |Outbound Traffic|Bytes|Sum|The outbound traffic of service|No Dimensions|
+|Server Load|Percent|Max / Avg|The percentage of server load|No Dimensions|
### Understand Dimensions
azure-web-pubsub Concept Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-performance.md
One of the key benefits of using Azure Web PubSub Service is the ease of scaling
In this guide, we'll introduce the factors that affect Web PubSub upstream application performance. We'll describe typical performance in different use-case scenarios.
+## Quick evaluation using metrics
+ Before going through the factors that impact performance, let's first introduce an easy way to monitor the pressure of your service. There's a metric called **Server Load** in the portal.
+
+ <kbd>![Screenshot of the Server Load metric of Azure Web PubSub on Portal. The metrics shows Server Load is at about 8 percent usage. ](./media/concept-performance/server-load.png "Server Load")</kbd>
++
+ It shows the computing pressure of your Azure Web PubSub service. You can test your own scenario and check this metric to decide whether to scale up. Latency inside the Azure Web PubSub service remains low as long as Server Load stays below 70%.
+
+> [!NOTE]
+> If you're using unit 50 or unit 100 **and** your scenario is mainly sending to small groups (group size <100), you need to check [sending to small group](#small-group) for reference. In those scenarios, there's a large routing cost that isn't included in the Server Load.
+
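One way to check this metric from scripts is with the Azure CLI. This is a sketch only; the metric name `ServerLoad` is an assumption to verify against the metric definitions of your resource:

```bash
# List the average Server Load over the default time window.
az monitor metrics list --resource <webpubsub_resource_id> --metric "ServerLoad" --aggregation Average
```
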
+ Below are detailed concepts for evaluating performance.
## Term definitions *Inbound*: The incoming message to Azure Web PubSub Service.
The bandwidth limit is the same as that for **send to big group**.
## Next steps
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Title: Azure Backup - Archive tier overview description: Learn about Archive tier support for Azure Backup. Previously updated : 05/12/2022 Last updated : 06/06/2022
Archive tier supports the following workloads:
| Azure Virtual Machines | Only monthly and yearly recovery points. Daily and weekly recovery points aren't supported. <br><br> Age >= 3 months in Vault-standard tier <br><br> Retention left >= 6 months. <br><br> No active daily and weekly dependencies. | | SQL Server in Azure Virtual Machines <br><br> SAP HANA in Azure Virtual Machines | Only full recovery points. Logs and differentials aren't supported. <br><br> Age >= 45 days in Vault-standard tier. <br><br> Retention left >= 6 months. <br><br> No dependencies. |
+A recovery point becomes archivable only if all the above conditions are met.
+ >[!Note] >- Archive tier support for Azure Virtual Machines, SQL Servers in Azure VMs and SAP HANA in Azure VM is now generally available in multiple regions. For the detailed list of supported regions, see the [support matrix](#support-matrix). >- Archive tier support for Azure Virtual Machines for the remaining regions is in limited public preview. To sign up for limited public preview, fill [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR463S33c54tEiJLEM6Enqb9UNU5CVTlLVFlGUkNXWVlMNlRPM1lJWUxLRy4u).
If you delete recovery points that haven't stayed in archive for a minimum of 18
Stop protection and delete data deletes all recovery points. For recovery points in archive that haven't stayed for a duration of 180 days in archive tier, deletion of recovery points leads to early deletion cost.
-## Archive Tier pricing
+## Archive tier pricing
You can view the Archive tier pricing from our [pricing page](azure-backup-pricing.md).
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
Previously updated : 02/25/2022 Last updated : 06/05/2022 #Customer intent: As someone with a networking background, I want to connect to a virtual machine securely via RDP/SSH using a private IP address through my browser.
In this quickstart, you deploy Bastion from your virtual machine settings in the
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the portal, go to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.
-1. Select **Bastion** in the left menu. You can view some of the values that will be used when creating the bastion host for your virtual network. Select **Deploy Bastion**.
+1. Select **Bastion** in the left menu. You can view some of the values that will be used when creating the bastion host for your virtual network. Select **Create Azure Bastion using defaults**.
:::image type="content" source="./media/quickstart-host-portal/deploy-bastion.png" alt-text="Screenshot of Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy-bastion.png"::: 1. Bastion begins deploying. This can take around 10 minutes to complete.
cloud-services-extended-support Override Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/override-sku.md
Setting the **allowModelOverride** property to `true` here will update the cloud
"packageUrl": "[parameters('packageSasUri')]", "configurationUrl": "[parameters('configurationSasUri')]", "upgradeMode": "[parameters('upgradeMode')]",
- ΓÇ£allowModelOverrideΓÇ¥ : true,
+ "allowModelOverride": true,
"roleProfile": { "roles": [ {
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Use the following table to determine supported styles and roles for each neural
|zh-CN-XiaomoNeural|`affectionate`, `angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `envious`, `fearful`, `gentle`, `sad`, `serious`|Supported|Supported| |zh-CN-XiaoruiNeural|`angry`, `calm`, `fearful`, `sad`|Supported|| |zh-CN-XiaoshuangNeural|`chat`|Supported||
-|zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `disgruntled`, `fearful`, `gentle`, `lyrical`, `newscast`, `sad`, `serious`|Supported||
+|zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `disgruntled`, `fearful`, `gentle`, `lyrical`, `newscast`, `poetry-reading`, `sad`, `serious`|Supported||
|zh-CN-XiaoxuanNeural|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported|Supported| |zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `narration-relaxed`, `sad`, `serious`|Supported|Supported| |zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported||
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following table has descriptions of each supported style.
|`style="newscast"`|Expresses a formal and professional tone for narrating news.| |`style="newscast-casual"`|Expresses a versatile and casual tone for general news delivery.| |`style="newscast-formal"`|Expresses a formal, confident, and authoritative tone for news delivery.|
+|`style="poetry-reading"`|Expresses an emotional and rhythmic tone while reading a poem.|
|`style="sad"`|Expresses a sorrowful tone.| |`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.| |`style="shouting"`|Speaks like from a far distant or outside and to make self be clearly heard|
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
Azure Cognitive Service for Language provides the following features:
> | [Custom NER](custom-named-entity-recognition/overview.md) | Build an AI model to extract custom entity categories, using unstructured text that you provide. | * [Language Studio](custom-named-entity-recognition/quickstart.md?pivots=language-studio) <br> * [REST API](custom-named-entity-recognition/quickstart.md?pivots=rest-api) | > | [Analyze sentiment and opinions](sentiment-opinion-mining/overview.md) | This pre-configured feature provides sentiment labels (such as "*negative*", "*neutral*" and "*positive*") for sentences and documents. This feature can additionally provide granular information about the opinions related to words that appear in the text, such as the attributes of products or services. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](sentiment-opinion-mining/quickstart.md) <br> * [Docker container](sentiment-opinion-mining/how-to/use-containers.md) > |[Language detection](language-detection/overview.md) | This pre-configured feature evaluates text, and determines the language it was written in. It returns a language identifier and a score that indicates the strength of the analysis. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](language-detection/quickstart.md) <br> * [Docker container](language-detection/how-to/use-containers.md) |
-> |[Custom text classification (preview)](custom-classification/overview.md) | Build an AI model to classify unstructured text into custom classes that you define. | * [Language Studio](custom-classification/quickstart.md?pivots=language-studio)<br> * [REST API](language-detection/quickstart.md?pivots=rest-api) |
+> |[Custom text classification](custom-classification/overview.md) | Build an AI model to classify unstructured text into custom classes that you define. | * [Language Studio](custom-classification/quickstart.md?pivots=language-studio)<br> * [REST API](language-detection/quickstart.md?pivots=rest-api) |
> | [Document summarization (preview)](summarization/overview.md?tabs=document-summarization) | This pre-configured feature extracts key sentences that collectively convey the essence of a document. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](summarization/quickstart.md) | > | [Conversation summarization (preview)](summarization/overview.md?tabs=conversation-summarization) | This pre-configured feature summarizes issues and summaries in transcripts of customer-service conversations. | * [Language Studio](language-studio.md) <br> * [REST API](summarization/quickstart.md?tabs=rest-api) |
-> | [Conversational language understanding (preview)](conversational-language-understanding/overview.md) | Build an AI model to bring the ability to understand natural language into apps, bots, and IoT devices. | * [Language Studio](conversational-language-understanding/quickstart.md)
+> | [Conversational language understanding](conversational-language-understanding/overview.md) | Build an AI model to bring the ability to understand natural language into apps, bots, and IoT devices. | * [Language Studio](conversational-language-understanding/quickstart.md)
> | [Question answering](question-answering/overview.md) | This pre-configured feature provides answers to questions extracted from text input, using semi-structured content such as: FAQs, manuals, and documents. | * [Language Studio](language-studio.md) <br> * [REST API and client-library](question-answering/quickstart/sdk.md) | > | [Orchestration workflow](orchestration-workflow/overview.md) | Train language models to connect your applications to question answering, conversational language understanding, and LUIS | * [Language Studio](orchestration-workflow/quickstart.md?pivots=language-studio) <br> * [REST API](orchestration-workflow/quickstart.md?pivots=rest-api) |
cognitive-services Responsible Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-use-cases.md
For more information, see:
|**Rewards**| A measure of how the user responded to the Rank API's returned reward action ID, as a score between 0 to 1. The 0 to 1 value is set by your business logic, based on how the choice helped achieve your business goals of personalization. The learning loop doesn't store this reward as individual user history. | |**Exploration**| The Personalizer service is exploring when, instead of returning the best action, it chooses a different action for the user. The Personalizer service avoids drift, stagnation, and can adapt to ongoing user behavior by exploring. |
-For more information, and additional key terms, please refer to the [Personalizer Terminology](/terminology.md) and [conceptual documentation](how-personalizer-works.md).
+For more information, and additional key terms, please refer to the [Personalizer Terminology](terminology.md) and [conceptual documentation](how-personalizer-works.md).
## Example use cases
communication-services Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/government.md
+
+ Title: Azure Communication Services in Azure Government
+description: Learn about using Azure Communication Services in US Government regions
++++ Last updated : 06/02/2022++++++++
+# Azure Communication Services for US Government
++
+Azure Communication Services can be used within [Azure Government](https://azure.microsoft.com/global-infrastructure/government/) to provide compliance with US government requirements for cloud services. In addition to enjoying the features and capabilities of Messaging, Voice and Video calling, developers benefit from the following features that are unique to Azure Government:
+- Your personal data is logically segregated from customer content in the commercial Azure cloud.
+- Your resource's customer content is stored within the United States.
+- Access to your organization's customer content is restricted to screened Microsoft personnel.
+
+You can find more information about the Office 365 Government - GCC High offering for US Government customers at [Office 365 Government plans](https://products.office.com/government/compare-office-365-government-plans). Please see [eligibility requirements](https://azure.microsoft.com/global-infrastructure/government/how-to-buy/) for Azure Government.
communication-services Browser Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/browser-support.md
+
+ Title: How to verify if your application is running in a web browser supported by Azure Communication Services
+description: Learn how to get current browser environment details using the Azure Communication Services Calling SDK for JavaScript
++ Last updated : 05/27/2022++++
+# How to verify if your application is running in a web browser supported by Azure Communication Services
+
+There are many different browsers available in the market today, but not all of them can properly support audio and video calling. To determine whether the browser your application is running on is supported, you can use the `getEnvironmentInfo` method to check for browser support.
+
+A `CallClient` instance is required for this operation. When you have a `CallClient` instance, you can use the `getEnvironmentInfo` method on the `CallClient` instance to obtain details about the current environment of your app:
++
+```javascript
+const callClient = new CallClient(options);
+const environmentInfo = await callClient.getEnvironmentInfo();
+```
+
+The `getEnvironmentInfo` method asynchronously returns an object of type `EnvironmentInfo`.
+
+- The `EnvironmentInfo` type is defined as:
+
+```javascript
+{
+ environment: Environment;
+ isSupportedPlatform: boolean;
+ isSupportedBrowser: boolean;
+ isSupportedBrowserVersion: boolean;
+ isSupportedEnvironment: boolean;
+}
+```
+- The `Environment` type within the `EnvironmentInfo` type is defined as:
+
+```javascript
+{
+ platform: string;
+ browser: string;
+ browserVersion: string;
+}
+```
+
+A supported environment is a combination of an operating system, a browser, and the minimum version required for that browser.
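+
+As a minimal sketch, you might gate call setup on this check (the warning message is illustrative):
+
+```javascript
+const environmentInfo = await callClient.getEnvironmentInfo();
+if (!environmentInfo.isSupportedEnvironment) {
+  // Warn the user that calling may not work in this browser/platform combination.
+  console.warn(`Unsupported environment: ${environmentInfo.environment.browser} ` +
+    `${environmentInfo.environment.browserVersion} on ${environmentInfo.environment.platform}`);
+}
+```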
container-registry Tasks Agent Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-agent-pools.md
Create an agent pool by using the [az acr agentpool create][az-acr-agentpool-cre
```azurecli az acr agentpool create \
+ --registry MyRegistry \
--name myagentpool \ --tier S2 ```
Scale the pool size up or down with the [az acr agentpool update][az-acr-agentpo
```azurecli az acr agentpool update \
+ --registry MyRegistry \
--name myagentpool \ --count 2 ```
subnetId=$(az network vnet subnet show \
--query id --output tsv) az acr agentpool create \
+ --registry MyRegistry \
--name myagentpool \ --tier S2 \ --subnet-id $subnetId
Queue a quick task on the agent pool by using the [az acr build][az-acr-build] c
```azurecli az acr build \
+ --registry MyRegistry \
--agent-pool myagentpool \ --image myimage:mytag \ --file Dockerfile \
For example, create a scheduled task on the agent pool with [az acr task create]
```azurecli az acr task create \
+ --registry MyRegistry \
--name mytask \ --agent-pool myagentpool \ --image myimage:mytag \
To verify task setup, run [az acr task run][az-acr-task-run]:
```azurecli az acr task run \
+ --registry MyRegistry \
--name mytask ```
To find the number of runs currently scheduled on the agent pool, run [az acr ag
```azurecli az acr agentpool show \
+ --registry MyRegistry \
--name myagentpool \ --queue-count ```
cosmos-db Account Databases Containers Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/account-databases-containers-items.md
Title: Azure Cosmos DB resource model description: This article describes Azure Cosmos DB resource model which includes the Azure Cosmos account, database, container, and the items. It also covers the hierarchy of these elements in an Azure Cosmos account. --+++ Last updated 07/12/2021-- # Azure Cosmos DB resource model
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
Last updated 03/24/2022 -+ # What is Azure Cosmos DB analytical store?
cosmos-db Attachments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/attachments.md
Last updated 08/07/2020-+ # Azure Cosmos DB Attachments
cosmos-db Audit Restore Continuous https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-restore-continuous.md
Last updated 04/18/2022 -+ # Audit the point in time restore action for continuous backup mode in Azure Cosmos DB
cosmos-db Automated Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/automated-recommendations.md
Last updated 08/26/2021-+
cosmos-db Bulk Executor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/bulk-executor-overview.md
Last updated 05/28/2019 -+ # Azure Cosmos DB bulk executor library overview
cosmos-db Apache Cassandra Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/apache-cassandra-consistency-mapping.md
Last updated 03/24/2022-+ # Apache Cassandra and Azure Cosmos DB Cassandra API consistency levels
cosmos-db Cassandra Adoption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-adoption.md
Last updated 03/24/2022-+
cosmos-db Cassandra Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-introduction.md
Title: Introduction to the Azure Cosmos DB Cassandra API
description: Learn how you can use Azure Cosmos DB to "lift-and-shift" existing applications and build new applications by using the Cassandra drivers and CQL -+
cosmos-db Cassandra Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-support.md
Title: Apache Cassandra features supported by Azure Cosmos DB Cassandra API
description: Learn about the Apache Cassandra feature support in Azure Cosmos DB Cassandra API -+
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB Cassandra API description: Azure CLI Samples for Azure Cosmos DB Cassandra API-+ Last updated 02/21/2022-++
cosmos-db Connect Spark Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/connect-spark-configuration.md
Title: Working with Azure Cosmos DB Cassandra API from Spark
description: This article is the main page for Cosmos DB Cassandra API integration from Spark. -+
cosmos-db Create Account Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/create-account-java.md
Title: 'Tutorial: Build Java app to create Azure Cosmos DB Cassandra API account
description: This tutorial shows how to create a Cassandra API account, add a database (also called a keyspace), and add a table to that account by using a Java application. -+
cosmos-db Load Data Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/load-data-table.md
Last updated 05/20/2019 -+ ms.devlang: java #Customer intent: As a developer, I want to build a Java application to load data to a Cassandra API table in Azure Cosmos DB so that customers can store and manage the key/value data and utilize the global distribution, elastic scaling, multi-region , and other capabilities offered by Azure Cosmos DB.
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB Cassandra API with Bicep description: Use Bicep to create and configure Azure Cosmos DB Cassandra API.-+ Last updated 9/13/2021-++ # Manage Azure Cosmos DB Cassandra API resources using Bicep
cosmos-db Migrate Data Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-arcion.md
Last updated 04/04/2022-+ # Migrate data from Cassandra to Azure Cosmos DB Cassandra API account using Arcion
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-striim.md
Last updated 12/09/2021 -+ # Migrate data to Azure Cosmos DB Cassandra API account using Striim
cosmos-db Migrate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data.md
Title: 'Migrate your data to a Cassandra API account in Azure Cosmos DB- Tutoria
description: In this tutorial, learn how to copy data from Apache Cassandra to a Cassandra API account in Azure Cosmos DB. -+
cosmos-db Oracle Migrate Cosmos Db Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/oracle-migrate-cosmos-db-arcion.md
Last updated 04/04/2022-+ # Migrate data from Oracle to Azure Cosmos DB Cassandra API account using Arcion
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Cassandra API description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Cassandra API-+ Last updated 01/20/2021-++ # Azure PowerShell samples for Azure Cosmos DB Cassandra API
cosmos-db Query Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/query-data.md
description: This tutorial shows how to query user data from an Azure Cosmos DB
-+ Last updated 09/24/2018
cosmos-db Secondary Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/secondary-indexing.md
Last updated 09/03/2021 -+ # Secondary indexing in Azure Cosmos DB Cassandra API
cosmos-db Spark Aggregation Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-aggregation-operations.md
Title: Aggregate operations on Azure Cosmos DB Cassandra API tables from Spark
description: This article covers basic aggregation operations against Azure Cosmos DB Cassandra API tables from Spark -+
cosmos-db Spark Create Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-create-operations.md
Title: Create or insert data into Azure Cosmos DB Cassandra API from Spark
description: This article details how to insert sample data into Azure Cosmos DB Cassandra API tables -+
cosmos-db Spark Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-databricks.md
Title: Access Azure Cosmos DB Cassandra API from Azure Databricks
description: This article covers how to work with Azure Cosmos DB Cassandra API from Azure Databricks. -+
cosmos-db Spark Ddl Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-ddl-operations.md
Title: DDL operations in Azure Cosmos DB Cassandra API from Spark
description: This article details keyspace and table DDL operations against Azure Cosmos DB Cassandra API from Spark. -+
cosmos-db Spark Delete Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-delete-operation.md
Title: Delete operations on Azure Cosmos DB Cassandra API from Spark
description: This article details how to delete data in tables in Azure Cosmos DB Cassandra API from Spark -+
cosmos-db Spark Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-hdinsight.md
Title: Access Azure Cosmos DB Cassandra API on YARN with HDInsight
description: This article covers how to work with Azure Cosmos DB Cassandra API from Spark on YARN with HDInsight. -+
cosmos-db Spark Read Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-read-operation.md
titleSuffix: Azure Cosmos DB
description: This article describes how to read data from Cassandra API tables in Azure Cosmos DB. -+
cosmos-db Spark Table Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-table-copy-operations.md
Title: Table copy operations on Azure Cosmos DB Cassandra API from Spark
description: This article details how to copy data between tables in Azure Cosmos DB Cassandra API -+
cosmos-db Spark Upsert Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-upsert-operations.md
Title: Upsert data into Azure Cosmos DB Cassandra API from Spark
description: This article details how to upsert into tables in Azure Cosmos DB Cassandra API from Spark -+
cosmos-db Templates Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/templates-samples.md
Title: Resource Manager templates for Azure Cosmos DB Cassandra API description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Cassandra API. -+ Last updated 10/14/2020-++ # Manage Azure Cosmos DB Cassandra API resources using Azure Resource Manager templates
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Title: Choose an API in Azure Cosmos DB description: Learn how to choose between SQL/Core, MongoDB, Cassandra, Gremlin, and table APIs in Azure Cosmos DB based on your workload requirements.--+++ Last updated 12/08/2021
cosmos-db Common Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/common-cli-samples.md
Last updated 02/22/2022--+++
cosmos-db Common Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/common-powershell-samples.md
description: Azure PowerShell Samples common to all Azure Cosmos DB APIs
Last updated 05/02/2022--+++
cosmos-db Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/compliance.md
Last updated 09/11/2021-+ # Compliance in Azure Cosmos DB
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md
Title: Azure Cosmos DB service quotas description: Azure Cosmos DB service quotas and default limits on different resource types.--+++ Last updated 05/30/2022
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-periodic-backup-restore.md
Last updated 12/09/2021 -+
cosmos-db Conflict Resolution Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/conflict-resolution-policies.md
Title: Conflict resolution types and resolution policies in Azure Cosmos DB description: This article describes the conflict categories and conflict resolution policies in Azure Cosmos DB.-+ Last updated 04/20/2020--++ # Conflict types and resolution policies when using multiple write regions
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Title: Consistency levels in Azure Cosmos DB description: Azure Cosmos DB has five consistency levels to help balance eventual consistency, availability, and latency trade-offs.--+++ Last updated 02/17/2022
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
Last updated 04/06/2022 -+
cosmos-db Continuous Backup Restore Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md
Last updated 02/28/2022 -+
cosmos-db Continuous Backup Restore Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-resource-model.md
Last updated 03/02/2022-+
The `RestoreParameters` resource contains the restore operation details includin
||| |restoreMode | The restore mode should be *PointInTime* | |restoreSource | The instanceId of the source account from which the restore will be initiated. |
-|restoreTimestampInUtc | Point in time in UTC to which the account should be restored to. |
+|restoreTimestampInUtc | Point in time in UTC to restore the account. |
|databasesToRestore | List of `DatabaseRestoreResource` objects to specify which databases and containers should be restored. Each resource represents a single database and all the collections under that database, see the [restorable SQL resources](#restorable-sql-resources) section for more details. If this value is empty, then the entire account is restored. | |gremlinDatabasesToRestore | List of `GremlinDatabaseRestoreResource` objects to specify which databases and graphs should be restored. Each resource represents a single database and all the graphs under that database. See the [restorable Gremlin resources](#restorable-graph-resources) section for more details. If this value is empty, then the entire account is restored. | |tablesToRestore | List of `TableRestoreResource` objects to specify which tables should be restored. Each resource represents a table under that database, see the [restorable Table resources](#restorable-table-resources) section for more details. If this value is empty, then the entire account is restored. |
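As context, a hedged Azure CLI sketch that supplies these restore parameters when triggering a point-in-time restore (the account, group, timestamp, and location values are hypothetical):

```azurecli
az cosmosdb restore \
  --resource-group myresourcegroup \
  --account-name mysourceaccount \
  --target-database-account-name mytargetaccount \
  --restore-timestamp 2022-05-01T12:00:00Z \
  --location westus
```

Omitting the databases-to-restore details restores the entire account, consistent with the table above.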
Each resource contains information of a mutation event such as creation and dele
| eventTimestamp | The time in UTC when the database is created or deleted. | | ownerId | The name of the SQL database. | | ownerResourceId | The resource ID of the SQL database|
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li>Create: database creation event</li><li>Delete: database deletion event</li><li>Replace: database modification event</li><li>SystemOperation: database modification event triggered by the system. This event is not initiated by the user</li></ul> |
+| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li>Create: database creation event</li><li>Delete: database deletion event</li><li>Replace: database modification event</li><li>SystemOperation: database modification event triggered by the system. This event isn't initiated by the user</li></ul> |
| database |The properties of the SQL database at the time of the event| To get a list of all database mutations, see [Restorable Sql Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-databases/list) article.
Each resource contains information of a mutation event such as creation and dele
| eventTimestamp | The time in UTC when this container event happened.| | ownerId| The name of the SQL container.| | ownerResourceId | The resource ID of the SQL container.|
-| operationType | The operation type of this container event. Here are the possible values: <br/><ul><li>Create: container creation event</li><li>Delete: container deletion event</li><li>Replace: container modification event</li><li>SystemOperation: container modification event triggered by the system. This event is not initiated by the user</li></ul> |
+| operationType | The operation type of this container event. Here are the possible values: <br/><ul><li>Create: container creation event</li><li>Delete: container deletion event</li><li>Replace: container modification event</li><li>SystemOperation: container modification event triggered by the system. This event isn't initiated by the user</li></ul> |
| container | The properties of the SQL container at the time of the event.| To get a list of all container mutations under the same database, see [Restorable Sql Containers - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-containers/list) article.
Each resource contains information of a mutation event such as creation and dele
|eventTimestamp| The time in UTC when this database event happened.| | ownerId| The name of the MongoDB database. | | ownerResourceId | The resource ID of the MongoDB database. |
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event is not initiated by the user </li></ul> |
+| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user </li></ul> |
To get a list of all database mutation, see [Restorable Mongodb Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-databases/list) article.
Each resource contains information of a mutation event such as creation and dele
| eventTimestamp |The time in UTC when this collection event happened. | | ownerId| The name of the MongoDB collection. | | ownerResourceId | The resource ID of the MongoDB collection. |
-| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: collection creation event</li><li>Delete: collection deletion event</li><li>Replace: collection modification event</li><li>SystemOperation: collection modification event triggered by the system. This event is not initiated by the user</li></ul> |
+| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: collection creation event</li><li>Delete: collection deletion event</li><li>Replace: collection modification event</li><li>SystemOperation: collection modification event triggered by the system. This event isn't initiated by the user</li></ul> |
-To get a list of all container mutations under the same database, see [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-collections/list) article.
+To get a list of all container mutations under the same database, see the [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-collections/list) article.
### Restorable MongoDB resources
Each resource contains information about a mutation event, such as a creation an
|eventTimestamp| The time in UTC when this database event happened.| | ownerId| The name of the Graph database. | | ownerResourceId | The resource ID of the Graph database. |
-| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event is not initiated by the user. </li></ul> |
+| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user. </li></ul> |
-To get a event feed of all mutations on the Gremlin database for the account, see the [Restorable Graph Databases - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-databases/list) article.
+To get an event feed of all mutations on the Gremlin database for the account, see the [Restorable Graph Databases - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-databases/list) article.
### Restorable Graphs
Each resource contains information of a mutation event such as creation and dele
| eventTimestamp |The time in UTC when this collection event happened. | | ownerId| The name of the Graph collection. | | ownerResourceId | The resource ID of the Graph collection. |
-| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: Graph creation event</li><li>Delete: Graph deletion event</li><li>Replace: Graph modification event</li><li>SystemOperation: collection modification event triggered by the system. This event is not initiated by the user.</li></ul> |
+| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: Graph creation event</li><li>Delete: Graph deletion event</li><li>Replace: Graph modification event</li><li>SystemOperation: collection modification event triggered by the system. This event isn't initiated by the user.</li></ul> |
To get a list of all container mutations under the same database, see graph [Restorable Graphs - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-gremlin-graphs/list) article. ### Restorable Table resources
-Lists all the restorable Azure Cosmos DB Tables available for a specific database account at a given time and location. Note the Table API does not specify an explicit database.
+Lists all the restorable Azure Cosmos DB Tables available for a specific database account at a given time and location. Note the Table API doesn't specify an explicit database.
|Property Name |Description | ||| | TableNames | The list of Table containers under this account. |
-To get a list of Table that exist on the account at the given timestamp and location, see [Restorable Table Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-table-resources/list) article.
+To get a list of tables that exist on the account at the given timestamp and location, see [Restorable Table Resources - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-table-resources/list) article.
### Restorable Table
Each resource contains information of a mutation event such as creation and dele
|eventTimestamp| The time in UTC when this database event happened.| | ownerId| The name of the Table database. | | ownerResourceId | The resource ID of the Table resource. |
-| operationType | The operation type of this Table event. Here are the possible values:<br/><ul><li> Create: Table creation event</li><li> Delete: Table deletion event</li><li> Replace: Table modification event</li><li> SystemOperation: database modification event triggered by the system. This event is not initiated by the user </li></ul> |
+| operationType | The operation type of this Table event. Here are the possible values:<br/><ul><li> Create: Table creation event</li><li> Delete: Table deletion event</li><li> Replace: Table modification event</li><li> SystemOperation: database modification event triggered by the system. This event isn't initiated by the user </li></ul> |
To get a list of all table mutations under the same database, see [Restorable Table - List](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/restorable-tables/list) article.
cosmos-db Cosmosdb Migrationchoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cosmosdb-migrationchoices.md
The following factors determine the choice of the migration tool:
If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md). * If you are migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](estimate-ru-with-capacity-planner.md).
+>[!IMPORTANT]
+> The [Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator) is an open-source tool for live container migrations that implements change feed and bulk support. However, the user interface application code for this tool isn't supported or actively maintained by Microsoft. For Azure Cosmos DB SQL API live container migrations, we recommend using the Spark Connector + Change Feed as illustrated in this [sample](https://github.com/Azure/azure-sdk-for-jav), which is fully supported by Microsoft.
+ |Migration type|Solution|Supported sources|Supported targets|Considerations| |||||| |Offline|[Data Migration Tool](import-data.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;MongoDB<br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;AWS DynamoDB<br/>&bull;Azure Blob Storage|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB Tables API<br/>&bull;JSON Files |&bull; Easy to set up and supports multiple sources. <br/>&bull; Not suitable for large datasets.| |Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| &bull;JSON/CSV Files<br/>&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB<br/>&bull;MongoDB <br/>&bull;SQL Server<br/>&bull;Table Storage<br/>&bull;Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB API for MongoDB<br/>&bull;JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |&bull; Easy to set up and supports multiple sources.<br/>&bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>&bull; Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.| |Offline|[Azure Cosmos DB Spark connector](./create-sql-api-spark.md)|Azure Cosmos DB SQL API. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB SQL API. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
+|Online|[Azure Cosmos DB Spark connector + Change Feed](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration)|Azure Cosmos DB SQL API. <br/><br/>Uses Azure Cosmos DB Change Feed to stream all historic data as well as live updates.| Azure Cosmos DB SQL API. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| &bull; Makes use of the Azure Cosmos DB bulk executor library. <br/>&bull; Suitable for large datasets. <br/>&bull; Needs a custom Spark setup. <br/>&bull; Spark is sensitive to schema inconsistencies and this can be a problem during migration. |
|Offline|[Custom tool with Cosmos DB bulk executor library](migrate-cosmosdb-data.md)| The source depends on your custom code | Azure Cosmos DB SQL API| &bull; Provides checkpointing, dead-lettering capabilities which increases migration resiliency. <br/>&bull; Suitable for very large datasets (10 TB+). <br/>&bull; Requires custom setup of this tool running as an App Service. | |Online|[Cosmos DB Functions + ChangeFeed API](change-feed-functions.md)| Azure Cosmos DB SQL API | Azure Cosmos DB SQL API| &bull; Easy to set up. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Not suitable for large datasets. <br/>&bull; Does not capture deletes from the source container. | |Online|[Custom Migration Service using ChangeFeed](https://github.com/Azure-Samples/azure-cosmosdb-live-data-migrator)| Azure Cosmos DB SQL API | Azure Cosmos DB SQL API| &bull; Provides progress tracking. <br/>&bull; Works only if the source is an Azure Cosmos DB container. <br/>&bull; Works for larger datasets as well.<br/>&bull; Requires the user to set up an App Service to host the Change feed processor. <br/>&bull; Does not capture deletes from the source container.|
If you need help with capacity planning, consider reading our [guide to estimati
|Migration type|Solution|Supported sources|Supported targets|Considerations| |||||| |Offline|[cqlsh COPY command](cassandr#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB Cassandra API| &bull; Easy to set up. <br/>&bull; Not suitable for large datasets. <br/>&bull; Works only when the source is a Cassandra table.|
-|Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/>&bull;Azure Cosmos DB Cassandra API| Azure Cosmos DB Cassandra API | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.|
+|Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | &bull;Apache Cassandra<br/> | Azure Cosmos DB Cassandra API | &bull; Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>&bull; Needs configuration with a custom retry policy to handle throttles.|
+|Online|[Dual-write proxy + Spark](cassandr)| &bull;Apache Cassandra<br/>|&bull;Azure Cosmos DB Cassandra API <br/>| &bull; Supports larger datasets, but careful attention required for setup and validation. <br/>&bull; Open-source tools, no purchase required.|
|Online|[Striim (from Oracle DB/Apache Cassandra)](cassandr)| &bull;Oracle<br/>&bull;Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources.|&bull;Azure Cosmos DB SQL API<br/>&bull;Azure Cosmos DB Cassandra API <br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets.| &bull; Works with a large variety of sources like Oracle, DB2, SQL Server. <br/>&bull; Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>&bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.| |Online|[Arcion (from Oracle DB/Apache Cassandra)](cassandr)|&bull;Oracle<br/>&bull;Apache Cassandra<br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported sources. |Azure Cosmos DB Cassandra API. <br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported targets. | &bull; Supports larger datasets. <br/>&bull; Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.|
For APIs other than the SQL API, Mongo API and the Cassandra API, there are vari
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) * Learn more by trying out the sample applications consuming the bulk executor library in [.NET](bulk-executor-dot-net.md) and [Java](bulk-executor-java.md). * The bulk executor library is integrated into the Cosmos DB Spark connector, to learn more, see [Azure Cosmos DB Spark connector](./create-sql-api-spark.md) article.
-* Contact the Azure Cosmos DB product team by opening a support ticket under the "General Advisory" problem type and "Large (TB+) migrations" problem subtype for additional help with large scale migrations.
+* Contact the Azure Cosmos DB product team by opening a support ticket under the "General Advisory" problem type and "Large (TB+) migrations" problem subtype for additional help with large scale migrations.
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md
Last updated 04/05/2021 -+
cosmos-db Emulator Command Line Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/emulator-command-line-parameters.md
Title: Command-line and PowerShell reference for Azure Cosmos DB Emulator
description: Learn the command-line parameters for Azure Cosmos DB Emulator, how to control the emulator with PowerShell, and how to change the number of containers that you can create within the emulator. --+++ Last updated 09/17/2020
cosmos-db Get Latest Restore Timestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-latest-restore-timestamp.md
Last updated 04/08/2022 -+ # Get the latest restorable timestamp for continuous backup accounts
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB Gremlin API description: Azure CLI Samples for Azure Cosmos DB Gremlin API-+ Last updated 02/21/2022-++
cosmos-db Graph Modeling Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-modeling-tools.md
Last updated 05/25/2021-+ # Third-party data modeling tools for Azure Cosmos DB graph data
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB Gremlin API with Bicep description: Use Bicep to create and configure Azure Cosmos DB Gremlin API. -+ Last updated 9/13/2021-++ # Manage Azure Cosmos DB Gremlin API resources using Bicep
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Gremlin API description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Gremlin API-+ Last updated 01/20/2021-++ # Azure PowerShell samples for Azure Cosmos DB Gremlin API
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/resource-manager-template-samples.md
Title: Resource Manager templates for Azure Cosmos DB Gremlin API description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Gremlin API. -+ Last updated 10/14/2020-++ # Manage Azure Cosmos DB Gremlin API resources using Azure Resource Manager templates
cosmos-db Tutorial Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/tutorial-query-graph.md
Last updated 02/16/2022-+ ms.devlang: csharp
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Title: High availability in Azure Cosmos DB description: This article describes how to build a highly available solution using Cosmos DB-+ Last updated 02/24/2022---++ # Achieve high availability with Cosmos DB
cosmos-db How Pricing Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-pricing-works.md
Title: Pricing model of Azure Cosmos DB description: This article explains the pricing model of Azure Cosmos DB and how it simplifies your cost management and cost planning.--+++ Last updated 03/24/2022 - # Pricing model in Azure Cosmos DB
The pricing model of Azure Cosmos DB simplifies the cost management and planning
> > [!VIDEO https://aka.ms/docs.how-pricing-works] -- **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos account you are using.
+- **Database operations**: The way you get charged for your database operations depends on the type of Azure Cosmos account you're using.
- - **Provisioned Throughput**: [Provisioned throughput](set-throughput.md) (also called reserved throughput) provides high performance at any scale. You specify the throughput that you need in [Request Units](request-units.md) per second (RU/s), and Azure Cosmos DB dedicates the resources required to provide the configured throughput. You can [provision throughput on either a database or a container](set-throughput.md). Based on your workload needs, you can scale throughput up/down at any time or use [autoscale](provision-throughput-autoscale.md) (although there is a minimum throughput required on a database or a container to guarantee the SLAs). You are billed hourly for the maximum provisioned throughput for a given hour.
+ - **Provisioned Throughput**: [Provisioned throughput](set-throughput.md) (also called reserved throughput) provides high performance at any scale. You specify the throughput that you need in [Request Units](request-units.md) per second (RU/s), and Azure Cosmos DB dedicates the resources required to provide the configured throughput. You can [provision throughput on either a database or a container](set-throughput.md). Based on your workload needs, you can scale throughput up/down at any time or use [autoscale](provision-throughput-autoscale.md) (although there's a minimum throughput required on a database or a container to guarantee the SLAs). You're billed hourly for the maximum provisioned throughput for a given hour.
> [!NOTE] > Because the provisioned throughput model dedicates resources to your container or database, you will be charged for the throughput you have provisioned even if you don't run any workloads.
- - **Serverless**: In [serverless](serverless.md) mode, you don't have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the amount of Request Units that has been consumed by your database operations.
+ - **Serverless**: In [serverless](serverless.md) mode, you don't have to provision any throughput when creating resources in your Azure Cosmos account. At the end of your billing period, you get billed for the number of Request Units consumed by your database operations.
-- **Storage**: You are billed a flat rate for the total amount of storage (in GBs) consumed by your data and indexes for a given hour. Storage is billed on a consumption basis, so you don't have to reserve any storage in advance. You are billed only for the storage you consume.
+- **Storage**: You're billed a flat rate for the total amount of storage (in GBs) consumed by your data and indexes for a given hour. Storage is billed on a consumption basis, so you don't have to reserve any storage in advance. You're billed only for the storage you consume.
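To illustrate the hourly billing rule above for provisioned throughput (the numbers are hypothetical): if a container is provisioned at 400 RU/s for most of an hour and scaled up to 1,000 RU/s partway through, that hour is billed at 1,000 RU/s, the maximum throughput provisioned during the hour.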
The pricing model in Azure Cosmos DB is consistent across all APIs. For more information, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/), [Understanding your Azure Cosmos DB bill](understand-your-bill.md) and [How Azure Cosmos DB pricing model is cost-effective for customers](total-cost-ownership.md).
-If you deploy your Azure Cosmos DB account to a non-government region in the US, there is a minimum price for both database and container-based throughput in provisioned throughput mode. There is no minimum price in serverless mode. The pricing varies depending on the region you are using, see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for latest pricing information.
+If you deploy your Azure Cosmos DB account to a non-government region in the US, there's a minimum price for both database and container-based throughput in provisioned throughput mode. There's no minimum price in serverless mode. The pricing varies depending on the region you're using; see the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for the latest pricing information.
## Try Azure Cosmos DB for free Azure Cosmos DB offers many options for developers to use it for free. These options include:
-* **Azure Cosmos DB free tier**: Azure Cosmos DB free tier makes it easy to get started, develop and test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account free, for the lifetime of the account. You can have up to one free tier account per Azure subscription and must opt-in when creating the account. To learn more, see how to [create a free tier account](free-tier.md) article.
+* **Azure Cosmos DB free tier**: Azure Cosmos DB free tier makes it easy to get started, develop and test your applications, or even run small production workloads for free. When free tier is enabled on an account, you'll get the first 1000 RU/s and 25 GB of storage in the account free, for the lifetime of the account. You can have up to one free tier account per Azure subscription and must opt in when creating the account. To learn more, see how to [create a free tier account](free-tier.md) article.
* **Azure free account**: Azure offers a [free tier](https://azure.microsoft.com/free/) that gives you $200 in Azure credits for the first 30 days and a limited quantity of free services for 12 months. For more information, see [Azure free account](../cost-management-billing/manage/avoid-charges-free-account.md). Azure Cosmos DB is a part of Azure free account. Specifically for Azure Cosmos DB, this free account offers 25-GB storage and 400 RU/s of provisioned throughput for the entire year. * **Try Azure Cosmos DB for free**: Azure Cosmos DB offers a time-limited experience by using try Azure Cosmos DB for free accounts. You can create an Azure Cosmos DB account, create database and collections and run a sample application by using the Quickstarts and tutorials. You can run the sample application without subscribing to an Azure account or using your credit card. [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) offers Azure Cosmos DB for one month, with the ability to renew your account any number of times.
-* **Azure Cosmos DB emulator**: Azure Cosmos DB emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Emulator is offered at no cost and with high fidelity to the cloud service. Using Azure Cosmos DB emulator, you can develop and test your applications locally, without creating an Azure subscription or incurring any costs. You can develop your applications by using the emulator locally before going into production. After you are satisfied with the functionality of the application against the emulator, you can switch to using the Azure Cosmos DB account in the cloud and significantly save on cost. For more information, see [Using Azure Cosmos DB for development and testing](local-emulator.md) for more details.
+* **Azure Cosmos DB emulator**: Azure Cosmos DB emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Emulator is offered at no cost and with high fidelity to the cloud service. Using Azure Cosmos DB emulator, you can develop and test your applications locally, without creating an Azure subscription or incurring any costs. You can develop your applications by using the emulator locally before going into production. After you're satisfied with the functionality of the application against the emulator, you can switch to using the Azure Cosmos DB account in the cloud and significantly save on cost. For more information about dev/test, see [using Azure Cosmos DB for development and testing](local-emulator.md).
## Pricing with reserved capacity
-Azure Cosmos DB [reserved capacity](cosmos-db-reserved-capacity.md) helps you save money when using the provisioned throughput mode by pre-paying for Azure Cosmos DB resources for either one year or three years. You can significantly reduce your costs with one-year or three-year upfront commitments and save between 20-65% discounts when compared to the regular pricing. Azure Cosmos DB reserved capacity helps you lower costs by pre-paying for the provisioned throughput (RU/s) for a period of one year or three years and you get a discount on the throughput provisioned.
+Azure Cosmos DB [reserved capacity](cosmos-db-reserved-capacity.md) helps you save money when using the provisioned throughput mode by pre-paying for Azure Cosmos DB resources for either one year or three years. You can significantly reduce your costs with one-year or three-year upfront commitments, with discounts of 20-65% compared to the regular pricing. Azure Cosmos DB reserved capacity helps you lower costs by pre-paying for the provisioned throughput (RU/s) for one year or three years, in exchange for a discount on the throughput provisioned.
-Reserved capacity provides a billing discount and does not affect the runtime state of your Azure Cosmos DB resources. Reserved capacity is available consistently to all APIs, which includes MongoDB, Cassandra, SQL, Gremlin, and Azure Tables and all regions worldwide. You can learn more about reserved capacity in [Prepay for Azure Cosmos DB resources with reserved capacity](cosmos-db-reserved-capacity.md) article and buy reserved capacity from the [Azure portal](https://portal.azure.com/).
+Reserved capacity provides a billing discount and doesn't affect the runtime state of your Azure Cosmos DB resources. Reserved capacity is available consistently to all APIs, which includes MongoDB, Cassandra, SQL, Gremlin, and Azure Tables, and all regions worldwide. You can learn more about reserved capacity in the [Prepay for Azure Cosmos DB resources with reserved capacity](cosmos-db-reserved-capacity.md) article and buy reserved capacity from the [Azure portal](https://portal.azure.com/).
## Next steps
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-manage-database-account.md
Title: Learn how to manage database accounts in Azure Cosmos DB description: Learn how to manage Azure Cosmos DB resources by using the Azure portal, PowerShell, CLI, and Azure Resource Manager templates-+ Last updated 09/13/2021-++ # Manage an Azure Cosmos account using the Azure portal
cosmos-db How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-move-regions.md
Title: Move an Azure Cosmos DB account to another region description: Learn how to move an Azure Cosmos DB account to another region.-+ Last updated 03/15/2022-++ # Move an Azure Cosmos DB account to another region
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Last updated 02/16/2022 -+ # Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account
See [this page](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/sql-res
## Initialize the SDK with Azure AD
-To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account's primary key, you have to pass an instance of a `TokenCredential` class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an Azure AD (AAD) token on behalf of the identity you wish to use.
+To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account's primary key, you have to pass an instance of a `TokenCredential` class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an Azure AD token on behalf of the identity you wish to use.
The way you create a `TokenCredential` instance is beyond the scope of this article. There are many ways to create such an instance depending on the type of Azure AD identity you want to use (user principal, service principal, group etc.). Most importantly, your `TokenCredential` instance must resolve to the identity (principal ID) that you've assigned your roles to. You can find examples of creating a `TokenCredential` class:
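Once you have a credential, a minimal JavaScript sketch of initializing the SDK with it (it assumes the `@azure/cosmos` and `@azure/identity` packages; the endpoint value is a placeholder):

```javascript
const { CosmosClient } = require("@azure/cosmos");
const { DefaultAzureCredential } = require("@azure/identity");

// Resolves to the Azure AD identity available in the current environment
// (managed identity, Azure CLI sign-in, environment variables, and so on).
const credential = new DefaultAzureCredential();

const client = new CosmosClient({
  endpoint: "https://<your-account>.documents.azure.com:443/", // placeholder
  aadCredentials: credential, // used instead of the account's primary key
});
```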
cosmos-db Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/import-data.md
Title: 'Tutorial: Database migration tool for Azure Cosmos DB' description: 'Tutorial: Learn how to use the open-source Azure Cosmos DB data migration tools to import data to Azure Cosmos DB from various sources including MongoDB, SQL Server, Table storage, Amazon DynamoDB, CSV, and JSON files. CSV to JSON conversion.'-+++ Last updated 08/26/2021-- + # Tutorial: Use Data migration tool to migrate your data to Azure Cosmos DB [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Title: Introduction to Azure Cosmos DB description: Learn about Azure Cosmos DB. This globally distributed multi-model database is built for low latency, elastic scalability, high availability, and offers native support for NoSQL data.--+++ Last updated 08/26/2021- adobe-target: true
cosmos-db Large Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/large-partition-keys.md
Title: Create Azure Cosmos containers with large partition key description: Learn how to create a container in Azure Cosmos DB with large partition key using Azure portal and different SDKs. -+ Last updated 12/8/2019-++
cosmos-db Latest Restore Timestamp Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/latest-restore-timestamp-continuous-backup.md
Last updated 04/08/2022 -+ # Latest restorable timestamp for Azure Cosmos DB accounts with continuous backup mode
You can use latest restorable timestamp in the following use cases:
* You can get the latest restorable timestamp for a container, database, or an account and use it to trigger the restore. This is the latest timestamp up to which all the data of the specified resource or all its underlying resources has been successfully backed up.
-* You can use this API to identify that your data has been successfully backed up before deleting the account. If the timestamp returned by this API is less than the last write timestamp, then it means that there is some data that has not been backed up yet. In such case, you must call this API until the timestamp becomes equal to or greater than the last write timestamp. If an account exists in multiple locations, you must get the latest restorable timestamp in all the locations to make sure that data has been backed up in all regions before deleting the account.
+* You can use this API to identify that your data has been successfully backed up before deleting the account. If the timestamp returned by this API is less than the last write timestamp, then there's some data that hasn't been backed up yet. In such a case, you must call this API until the timestamp becomes equal to or greater than the last write timestamp. If an account exists in multiple locations, you must get the latest restorable timestamp in all the locations to make sure that data has been backed up in all regions before deleting the account.
* You can use this API to monitor that your data is being backed up on time. This timestamp is generally within a few hundred seconds of the current timestamp, although sometimes it can differ by more. ## Semantics
-The latest restorable timestamp for a container is the minimum timestamp upto which all its partitions has taken backup successfully in the given location. This Api calculates the latest restorable timestamp by retrieving the latest backup timestamp for each partition of the given container in given location and returns the minimum of all these timestamps. If the data for all its partitions is backed up and there was no new data written to those partitions, then it will return the maximum of current timestamp and the last data backup timestamp.
+The latest restorable timestamp for a container is the minimum timestamp up to which all its partitions have taken backup successfully in the given location. This API calculates the latest restorable timestamp by retrieving the latest backup timestamp for each partition of the given container in the given location, and returns the minimum of all these timestamps. If the data for all its partitions is backed up and there was no new data written to those partitions, then it returns the maximum of the current timestamp and the last data backup timestamp.
-If a partition has not taken any backup yet but it has some data to be backed up, then it will return the minimum Unix (epoch) timestamp that is, Jan 1, 1970, midnight UTC (Coordinated Universal Time). In such cases, user must retry until it gives a timestamp greater than epoch timestamp.
+If a partition hasn't taken any backup yet but has some data to be backed up, then it returns the minimum Unix (epoch) timestamp, that is, January 1, 1970, midnight UTC (Coordinated Universal Time). In such cases, the user must retry until the API returns a timestamp greater than the epoch timestamp.
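For example, a hedged sketch of retrieving this timestamp for a SQL API container with the Azure CLI (the resource names are hypothetical):

```azurecli
az cosmosdb sql retrieve-latest-backup-time \
  --resource-group myresourcegroup \
  --account-name myaccount \
  --database-name mydatabase \
  --container-name cont1 \
  --location "East US"
```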
## Latest restorable timestamp calculation
-The following example describes the expected outcome of latest restorable timestamp Api in different scenarios. In each scenario, we will discuss about the current log backup state of partition, pending data to be backed up and how it affects the overall latest restorable timestamp calculation for a container.
+The following example describes the expected outcome of the latest restorable timestamp API in different scenarios. In each scenario, we'll discuss the current log backup state of each partition, the pending data to be backed up, and how they affect the overall latest restorable timestamp calculation for a container.
-Let's say, we have an account which exists in 2 regions (East US and West US). We have a container "cont1" which has 2 partitions (Partition1 and Partition2). If we send a request to get the latest restorable timestamp for this container at timestamp 't3', the overall latest restorable timestamp for this container will be calculated as follows:
+Let's say we have an account that exists in two regions (East US and West US), and a container "cont1" that has two partitions (Partition1 and Partition2). If we send a request to get the latest restorable timestamp for this container at timestamp 't3', the overall latest restorable timestamp for this container is calculated as follows:
-##### Case1: Data for all the partitions has not been backed up yet
+##### Case1: Data for all the partitions hasn't been backed up yet
*East US Region:*
Let's say, we have an account which exists in 2 regions (East US and West US). W
* Partition 2: Last backup time = t3, and all its data is backed up. * Latest restorable timestamp = max (current timestamp, t3, t3)
-##### Case3: When one or more partitions has not taken any backup yet
+##### Case3: When one or more partitions haven't taken any backup yet
*East US Region:*
Yes. This API can be used for account provisioned with continuous backup mode or
The log backup data is backed up every 100 seconds. However, in some exceptional cases, backups could be delayed for more than 100 seconds. #### Will restorable timestamp work for deleted accounts?
-No. It only applies only to live accounts. You can get the restorable timestamp to trigger the live account restore or monitor that your data is being backed up on time.
+No. It applies only to live accounts. You can get the restorable timestamp to trigger the live account restore or monitor that your data is being backed up on time.
## Next steps
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
Title: Migrate an Azure Cosmos DB account from periodic to continuous backup mode
-description: Azure Cosmos DB currently supports a one-way migration from periodic to continuous mode and it's irreversible. After migrating from periodic to continuous mode, you can leverage the benefits of continuous mode.
+description: Azure Cosmos DB currently supports a one-way migration from periodic to continuous mode and it's irreversible. After migrating from periodic to continuous mode, you can apply the benefits of continuous mode.
Last updated 04/08/2022 -+ # Migrate an Azure Cosmos DB account from periodic to continuous backup mode [!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)]
-Azure Cosmos DB accounts with periodic mode backup policy can be migrated to continuous mode using [Azure portal](#portal), [CLI](#cli), [PowerShell](#powershell), or [Resource Manager templates](#ARM-template). Migration from periodic to continuous mode is a one-way migration and it's not reversible. After migrating from periodic to continuous mode, you can leverage the benefits of continuous mode.
+Azure Cosmos DB accounts with periodic mode backup policy can be migrated to continuous mode using [Azure portal](#portal), [CLI](#cli), [PowerShell](#powershell), or [Resource Manager templates](#ARM-template). Migration from periodic to continuous mode is a one-way migration and it's not reversible. After migrating from periodic to continuous mode, you can apply the benefits of continuous mode.
The following are the key reasons to migrate into continuous mode: * The ability to do self-service restore using Azure portal, CLI, or PowerShell. * The ability to restore at time granularity of the second within the last 30-day window. * The ability to make sure that the backup is consistent across shards or partition key ranges within a period.
-* The ability to restore container, database, or the full account when it is deleted or modified.
+* The ability to restore container, database, or the full account when it's deleted or modified.
* The ability to choose the events on the container, database, or account and decide when to initiate the restore. > [!NOTE]
To perform the migration, you need `Microsoft.DocumentDB/databaseAccounts/write`
## Pricing after migration
-After you migrate your account to continuous backup mode, the cost with this mode is different when compared to the periodic backup mode. The continuous mode backup cost is significantly cheaper than periodic mode. To learn more, see the [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing) example.
+After you migrate your account to continuous backup mode, the cost differs from the periodic backup mode. The continuous mode backup cost is lower than periodic mode. To learn more, see the [continuous backup mode pricing](continuous-backup-restore-introduction.md#continuous-backup-pricing) example.
## <a id="portal"></a> Migrate using portal
Use the following steps to migrate your account from periodic backup to continuo
:::image type="content" source="./media/migrate-continuous-backup/enable-backup-migration.png" alt-text="Migrate to continuous mode using Azure portal" lightbox="./media/migrate-continuous-backup/enable-backup-migration.png":::
-1. When the migration is in progress, the status shows **Pending.** After the it's complete, the status changes to **On.** Migration time depends on the size of data in your account.
+1. When the migration is in progress, the status shows **Pending.** After it's complete, the status changes to **On.** Migration time depends on the size of data in your account.
:::image type="content" source="./media/migrate-continuous-backup/migration-status.png" alt-text="Check the status of migration from Azure portal" lightbox="./media/migrate-continuous-backup/migration-status.png":::
Install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps
* If you already have Azure CLI installed, use the `az upgrade` command to upgrade to the latest version. * Alternatively, you can use Cloud Shell from the Azure portal.
-1. Log in to your Azure account and run the following command to migrate your account to continuous mode:
+1. Sign in to your Azure account and run the following command to migrate your account to continuous mode:
```azurecli-interactive az login
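# A minimal sketch of the migration command itself (<ResourceGroup> and
# <AccountName> are placeholders for your own values): switch the account's
# backup policy from periodic to continuous mode.
az cosmosdb update \
  --resource-group <ResourceGroup> \
  --name <AccountName> \
  --backup-policy-type Continuous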
az group deployment create -g <ResourceGroup> --template-file <ProvisionTemplate
## What to expect during and after migration?
-When migrating from periodic mode to continuous mode, you cannot run any control plane operations that performs account level updates or deletes. For example, operations such as adding or removing regions, account failover, updating backup policy etc. can't be run while the migration is in progress. The time for migration depends on the size of data and the number of regions in your account. Restore action on the migrated accounts only succeeds from the time when migration successfully completes.
+When migrating from periodic mode to continuous mode, you can't run any control plane operations that perform account-level updates or deletes. For example, operations such as adding or removing regions, account failover, and updating the backup policy can't be run while the migration is in progress. The time for migration depends on the size of data and the number of regions in your account. A restore on a migrated account succeeds only for points in time after the migration successfully completes.
You can restore your account after the migration completes. If the migration completes at 1:00 PM PST, you can do a point-in-time restore starting from 1:00 PM PST.
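As a rough sketch, a post-migration point-in-time restore with Azure CLI could look like the following; the resource group, account names, region, and timestamp are placeholders, and the timestamp must fall after the migration completed:

```azurecli-interactive
# Restore the live account into a new target account at the given UTC timestamp.
az cosmosdb restore \
  --resource-group <ResourceGroup> \
  --account-name <SourceAccountName> \
  --target-database-account-name <TargetAccountName> \
  --restore-timestamp "2022-05-24T21:00:00+0000" \
  --location "West US"
```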
You can restore your account after the migration completes. If the migration com
Yes. #### Which accounts can be targeted for backup migration?
-Currently, SQL API and API for MongoDB accounts with single write region, that have shared, provisioned, or autoscale provisioned throughput support migration. Table API and Gremlin API are in preview.
+Currently, SQL API and API for MongoDB accounts with a single write region that have shared, provisioned, or autoscale provisioned throughput support migration. Table API and Gremlin API are in preview.
-Accounts enabled with analytical storage and multiple-write regions are not supported for migration.
+Accounts enabled with analytical storage and multiple-write regions aren't supported for migration.
#### Does the migration take time? What is the typical time?
-Migration takes time and it depends on the size of data and the number of regions in your account. You can get the migration status using Azure CLI or PowerShell commands. For large accounts with 10s of terabytes of data, the migration can take up to few days to complete.
+Migration takes time, and it depends on the size of data and the number of regions in your account. You can get the migration status using Azure CLI or PowerShell commands. For large accounts with tens of terabytes of data, the migration can take up to a few days to complete.
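One way to check progress, as a minimal Azure CLI sketch (resource group and account names are placeholders), is to read the account's backup policy; while a migration is in progress, the policy carries a migration state:

```azurecli-interactive
# Show the account's backup policy; during an in-progress migration it
# includes a migrationState object with the current status and start time.
az cosmosdb show \
  --resource-group <ResourceGroup> \
  --name <AccountName> \
  --query "backupPolicy"
```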
#### Does the migration cause any availability impact/downtime?
-No, the migration operation takes place in the background, so the client requests are not impacted. However, we need to perform some backend operations during the migration, and it might take extra time if the account is under heavy load.
+No, the migration operation takes place in the background, so client requests aren't impacted. However, we need to perform some backend operations during the migration, and it might take extra time if the account is under heavy load.
#### What happens if the migration fails? Will I still get the periodic backups or get the continuous backups? Once the migration process starts, the account begins transitioning to continuous mode. If the migration fails, you must initiate the migration again until it succeeds.
To restore to a time before t1, you can open a support ticket like you normally
#### Which account-level control plane operations are blocked during migration? Operations such as adding or removing regions, failover, changing the backup policy, and throughput changes resulting in data movement are blocked during migration.
-#### If the migration fails for some underlying issue, would it still block the control plane operation until it is retried and completed successfully?
-Failed migration will not block any control plane operations. If migration fails, it's recommended to retry until it succeeds before performing any other control plane operations.
+#### If the migration fails for some underlying issue, would it still block the control plane operation until it's retried and completed successfully?
+A failed migration won't block any control plane operations. If migration fails, it's recommended to retry until it succeeds before performing any other control plane operations.
#### Is it possible to cancel the migration?
-It is not possible to cancel the migration because it is not a reversible operation.
+It isn't possible to cancel the migration because it isn't a reversible operation.
#### Is there a tool that can help estimate migration time based on the data usage and number of regions? There isn't a tool to estimate migration time. However, our scale runs indicate that a single region with 1 TB of data takes roughly one and a half hours.
To learn more about continuous backup mode, see the following articles:
* Restore an account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template). Trying to do capacity planning for a migration to Azure Cosmos DB?
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB API for MongoDB description: Azure CLI Samples for Azure Cosmos DB API for MongoDB-+ Last updated 02/21/2022-++
cosmos-db Connect Mongodb Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-mongodb-account.md
Last updated 08/26/2021-+ adobe-target: true adobe-target-activity: DocsExp-A/B-384740-MongoDB-2.8.2021 adobe-target-experience: Experience B
cosmos-db Consistency Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/consistency-mapping.md
Last updated 10/12/2020-+
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/manage-with-bicep.md
Title: Create and manage MongoDB API for Azure Cosmos DB with Bicep description: Use Bicep to create and configure the Azure Cosmos DB API for MongoDB.-+ Last updated 05/23/2022-++ # Manage Azure Cosmos DB MongoDB API resources using Bicep
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB API for MongoDB description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB API for MongoDB-+ Last updated 08/26/2021-++ # Azure PowerShell samples for Azure Cosmos DB API for MongoDB
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/resource-manager-template-samples.md
Title: Resource Manager templates for Azure Cosmos DB API for MongoDB description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB API for MongoDB. -+ Last updated 05/23/2022-++ # Manage Azure Cosmos DB MongoDB API resources using Azure Resource Manager templates
cosmos-db Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/troubleshoot-query-performance.md
Last updated 08/26/2021 -+ # Troubleshoot query issues when using the Azure Cosmos DB API for MongoDB
cosmos-db Tutorial Develop Mongodb React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-mongodb-react.md
ms.devlang: javascript
Last updated 08/26/2021 -+ # Create a MongoDB app with React and Azure Cosmos DB
cosmos-db Tutorial Develop Nodejs Part 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-1.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db Tutorial Develop Nodejs Part 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-2.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Create a Node.js Express app [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db Tutorial Develop Nodejs Part 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-3.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Build the UI with Angular [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db Tutorial Develop Nodejs Part 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-4.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Create a Cosmos account
cosmos-db Tutorial Develop Nodejs Part 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-5.md
Last updated 08/26/2021 -+ #Customer intent: As a developer, I want to build a Node.js application, so that I can manage the data stored in Cosmos DB.
cosmos-db Tutorial Develop Nodejs Part 6 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-6.md
Last updated 08/26/2021 -+ # Create an Angular app with Azure Cosmos DB's API for MongoDB - Add CRUD functions to the app [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db Tutorial Global Distribution Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-global-distribution-mongodb.md
Last updated 08/26/2021-+ ms.devlang: csharp
cosmos-db Tutorial Query Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-query-mongodb.md
Last updated 12/03/2019-+ # Query data by using Azure Cosmos DB's API for MongoDB
cosmos-db Online Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/online-backup-and-restore.md
Last updated 11/15/2021 -+
cosmos-db Optimize Cost Reads Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-reads-writes.md
Title: Optimizing the cost of your requests in Azure Cosmos DB description: This article explains how to optimize costs when issuing requests on Azure Cosmos DB.--+++ Last updated 08/26/2021
cosmos-db Optimize Cost Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-regions.md
Title: Optimize cost for multi-region deployments in Azure Cosmos DB description: This article explains how to manage costs of multi-region deployments in Azure Cosmos DB.--+++ Last updated 08/26/2021
cosmos-db Optimize Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-storage.md
Title: Optimize storage cost in Azure Cosmos DB description: This article explains how to manage storage costs for the data stored in Azure Cosmos DB--+++ Last updated 08/26/2021
cosmos-db Optimize Cost Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-throughput.md
Title: Optimizing throughput cost in Azure Cosmos DB description: This article explains how to optimize throughput costs for the data stored in Azure Cosmos DB.--+++ Last updated 08/26/2021
cosmos-db Optimize Dev Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-dev-test.md
Title: Optimizing for development and testing in Azure Cosmos DB description: This article explains how Azure Cosmos DB offers multiple options for development and testing of the service for free.--+++ Last updated 08/26/2021
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Last updated 03/24/2022 -+ # Partitioning and horizontal scaling in Azure Cosmos DB
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB
description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 05/11/2022 --+++
cosmos-db Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy.md
Title: Use Azure Policy to implement governance and controls for Azure Cosmos DB resources description: Learn how to use Azure Policy to implement governance and controls for Azure Cosmos DB resources.--+++ Last updated 09/23/2020
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
Last updated 04/18/2022 -+ ms.devlang: azurecli
cosmos-db Relational Nosql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/relational-nosql.md
Last updated 12/16/2019-+ adobe-target: true
cosmos-db Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/request-units.md
Title: Request Units as a throughput and performance currency in Azure Cosmos DB description: Learn about how to specify and estimate Request Unit requirements in Azure Cosmos DB--+++ Last updated 03/24/2022 - # Request Units in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
cosmos-db Resource Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-locks.md
Title: Prevent Azure Cosmos DB resources from being deleted or changed description: Use Azure Resource Locks to prevent Azure Cosmos DB resources from being deleted or changed. -+ Last updated 05/13/2021-++ ms.devlang: azurecli
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Last updated 04/18/2022 -+
cosmos-db Scaling Provisioned Throughput Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scaling-provisioned-throughput-best-practices.md
Last updated 08/20/2021 -+ # Best practices for scaling provisioned throughput (RU/s)
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
Title: Azure Cosmos DB Cassandra API keyspace and table with autoscale description: Use Azure CLI to create an Azure Cosmos DB Cassandra API account, keyspace, and table with autoscale.--+++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/create.md
Title: Create a Cassandra keyspace and table for Azure Cosmos DB description: Create a Cassandra keyspace and table for Azure Cosmos DB--+++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/lock.md
Title: Create resource lock for a Cassandra keyspace and table for Azure Cosmos DB description: Create resource lock for a Cassandra keyspace and table for Azure Cosmos DB--+++
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/serverless.md
Title: Create a Cassandra serverless account, keyspace and table for Azure Cosmos DB description: Create a Cassandra serverless account, keyspace and table for Azure Cosmos DB--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources--+++
cosmos-db Ipfirewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/ipfirewall.md
Title: Create an Azure Cosmos account with IP firewall description: Create an Azure Cosmos account with IP firewall--+++ Last updated 02/21/2022
cosmos-db Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/keys.md
Title: Work with account keys and connection strings for an Azure Cosmos account description: Work with account keys and connection strings for an Azure Cosmos account--+++ Last updated 02/21/2022
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
Title: Add regions, change failover priority, trigger failover for an Azure Cosmos account description: Add regions, change failover priority, trigger failover for an Azure Cosmos account--+++ Last updated 02/21/2022
cosmos-db Service Endpoints Ignore Missing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints-ignore-missing-vnet.md
Title: Connect an existing Azure Cosmos account with virtual network service endpoints description: Connect an existing Azure Cosmos account with virtual network service endpoints--+++ Last updated 02/21/2022
cosmos-db Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints.md
Title: Create an Azure Cosmos account with virtual network service endpoints description: Create an Azure Cosmos account with virtual network service endpoints--+++ Last updated 02/21/2022
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
Title: Azure Cosmos DB Gremlin database and graph with autoscale description: Use this Azure CLI script to create an Azure Cosmos DB Gremlin API account, database, and graph with autoscale.--+++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/create.md
Title: Create a Gremlin database and graph for Azure Cosmos DB description: Create a Gremlin database and graph for Azure Cosmos DB--+++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/lock.md
Title: Create resource lock for a Gremlin database and graph for Azure Cosmos DB description: Create resource lock for a Gremlin database and graph for Azure Cosmos DB--+++
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
Title: Azure Cosmos DB Gremlin serverless account, database, and graph description: Use this Azure CLI script to create an Azure Cosmos DB Gremlin serverless account, database, and graph.--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Gremlin API resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Gremlin API resources--+++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/autoscale.md
Title: Create a database with autoscale and shared collections for MongoDB API for Azure Cosmos DB description: Create a database with autoscale and shared collections for MongoDB API for Azure Cosmos DB--+++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/create.md
Title: Create a database and collection for MongoDB API for Azure Cosmos DB description: Create a database and collection for MongoDB API for Azure Cosmos DB--+++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/lock.md
Title: Create resource lock for a database and collection for MongoDB API for Azure Cosmos DB description: Create resource lock for a database and collection for MongoDB API for Azure Cosmos DB--+++
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/serverless.md
Title: Create a serverless database and collection for MongoDB API for Azure Cosmos DB description: Create a serverless database and collection for MongoDB API for Azure Cosmos DB--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB API for MongoDB resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB API for MongoDB resources--+++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/autoscale.md
Title: Create a Core (SQL) API database and container with autoscale for Azure Cosmos DB description: Create a Core (SQL) API database and container with autoscale for Azure Cosmos DB--+++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/create.md
Title: Create a Core (SQL) API database and container for Azure Cosmos DB description: Create a Core (SQL) API database and container for Azure Cosmos DB--+++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/lock.md
Title: Create resource lock for an Azure Cosmos DB Core (SQL) API database and container description: Create resource lock for an Azure Cosmos DB Core (SQL) API database and container--+++
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/serverless.md
Title: Create a Core (SQL) API serverless account, database and container for Azure Cosmos DB description: Create a Core (SQL) API serverless account, database and container for Azure Cosmos DB--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Core (SQL) API resources--+++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
Title: Create a Table API table with autoscale for Azure Cosmos DB description: Create a Table API table with autoscale for Azure Cosmos DB--+++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
Title: Create a Table API table for Azure Cosmos DB description: Create a Table API table for Azure Cosmos DB--+++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
Title: Create resource lock for an Azure Cosmos DB Table API table description: Create resource lock for an Azure Cosmos DB Table API table--+++
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
Title: Create a Table API serverless account and table for Azure Cosmos DB description: Create a Table API serverless account and table for Azure Cosmos DB--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
Title: Perform throughput (RU/s) operations for Azure Cosmos DB Table API resources description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos DB Table API resources--+++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/autoscale.md
Title: PowerShell script to create Azure Cosmos DB Cassandra API keyspace and table with autoscale description: Azure PowerShell script - Azure Cosmos DB create Cassandra API keyspace and table with autoscale-+ Last updated 07/30/2020-++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/create.md
Title: PowerShell script to create Azure Cosmos DB Cassandra API keyspace and table description: Azure PowerShell script - Azure Cosmos DB create Cassandra API keyspace and table-+ Last updated 05/13/2020-++
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB Cassandra API resources description: Azure PowerShell script - Azure Cosmos DB list and get operations for Cassandra API-+ Last updated 03/18/2020-++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos Cassandra API keyspace and table description: Create resource lock for Azure Cosmos Cassandra API keyspace and table--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Cassandra API resources-+ Last updated 10/07/2020-++
cosmos-db Account Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/account-update.md
Title: PowerShell script to update the default consistency level on an Azure Cosmos account description: Azure PowerShell script sample - Update default consistency level on an Azure Cosmos DB account using PowerShell-+ Last updated 03/21/2020-++
cosmos-db Failover Priority Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/failover-priority-update.md
Title: PowerShell script to change failover priority for an Azure Cosmos account with single write region description: Azure PowerShell script sample - Change failover priority or trigger failover for an Azure Cosmos account with single write region-+ Last updated 03/18/2020-++
cosmos-db Firewall Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/firewall-create.md
Title: PowerShell script to create an Azure Cosmos DB account with IP Firewall description: Azure PowerShell script sample - Create an Azure Cosmos DB account with IP Firewall-+ Last updated 03/18/2020-++
cosmos-db Keys Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/keys-connection-strings.md
Title: PowerShell script to get key and connection string operations for an Azure Cosmos DB account description: Azure PowerShell script sample - Account key and connection string operations for an Azure Cosmos DB account-+ Last updated 03/18/2020-++
cosmos-db Update Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/update-region.md
Title: PowerShell script to update regions for an Azure Cosmos DB account description: Run this Azure PowerShell script to add regions or change region failover order for an Azure Cosmos DB account.-+ Last updated 05/02/2022-++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/autoscale.md
Title: PowerShell script to create Azure Cosmos DB Gremlin API database and graph with autoscale description: Azure PowerShell script - Azure Cosmos DB create Gremlin API database and graph with autoscale-+ Last updated 07/30/2020-++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/create.md
Title: PowerShell script to create Azure Cosmos DB Gremlin API database and graph description: Azure PowerShell script - Azure Cosmos DB create Gremlin API database and graph-+ Last updated 05/13/2020-++
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/list-get.md
Title: PowerShell script to list or get Azure Cosmos DB Gremlin API databases and graphs description: Run this Azure PowerShell script to list all or get specific Azure Cosmos DB Gremlin API databases and graphs.-+ Last updated 05/02/2022-++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos Gremlin API database and graph description: Create resource lock for Azure Cosmos Gremlin API database and graph--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Gremlin API description: PowerShell scripts for throughput (RU/s) operations for Gremlin API-+ Last updated 10/07/2020-++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/autoscale.md
Title: PowerShell script to create Azure Cosmos MongoDB API database and collection with autoscale description: Azure PowerShell script - create Azure Cosmos MongoDB API database and collection with autoscale-+ Last updated 07/30/2020-++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/create.md
Title: PowerShell script to create Azure Cosmos MongoDB API database and collection description: Azure PowerShell script - create Azure Cosmos MongoDB API database and collection-+ Last updated 05/13/2020-++
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/list-get.md
Title: PowerShell script to list and get operations in Azure Cosmos DB's API for MongoDB description: Azure PowerShell script - Azure Cosmos DB list and get operations for MongoDB API-+ Last updated 05/01/2020-++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos MongoDB API database and collection description: Create resource lock for Azure Cosmos MongoDB API database and collection--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB's API for MongoDB description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB's API for MongoDB-+ Last updated 10/07/2020-++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/autoscale.md
Title: PowerShell script to create Azure Cosmos DB SQL API database and container with autoscale description: Azure PowerShell script - Azure Cosmos DB create SQL API database and container with autoscale-+ Last updated 07/30/2020-++
cosmos-db Create Index None https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/create-index-none.md
Title: PowerShell script to create a container with indexing turned off in an Azure Cosmos DB account description: Azure PowerShell script sample - Create a container with indexing turned off in an Azure Cosmos DB account-+ Last updated 05/13/2020-++
cosmos-db Create Large Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/create-large-partition-key.md
Title: PowerShell script to create an Azure Cosmos DB container with a large partition key description: Azure PowerShell script sample - Create a container with a large partition key in an Azure Cosmos DB account-+ Last updated 05/13/2020-++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/create.md
Title: PowerShell script to create Azure Cosmos DB SQL API database and container description: Azure PowerShell script - Azure Cosmos DB create SQL API database and container-+ Last updated 05/13/2020-++
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB SQL API resources description: Azure PowerShell script - Azure Cosmos DB list and get operations for SQL API-+ Last updated 03/17/2020-++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos SQL API database and container description: Create resource lock for Azure Cosmos SQL API database and container--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/sql/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB SQL API database or container description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB SQL API database or container-+ Last updated 10/07/2020-++
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/autoscale.md
Title: PowerShell script to create a table with autoscale in Azure Cosmos DB Table API description: PowerShell script to create a table with autoscale in Azure Cosmos DB Table API-+ Last updated 07/30/2020-++
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/create.md
Title: PowerShell script to create a table in Azure Cosmos DB Table API description: Learn how to use a PowerShell script to update the throughput for a database or a container in Azure Cosmos DB Table API-+ Last updated 05/13/2020-++
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB Table API operations description: Azure PowerShell script - Azure Cosmos DB list and get operations for Table API-+ Last updated 07/31/2020-++
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/lock.md
Title: PowerShell script to create resource lock for Azure Cosmos Table API table description: Create resource lock for Azure Cosmos Table API table--+++
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Table API description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB Table API-+ Last updated 10/07/2020-++
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB
description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 05/10/2022 --+++
cosmos-db Set Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/set-throughput.md
Title: Provision throughput on Azure Cosmos containers and databases description: Learn how to set provisioned throughput for your Azure Cosmos containers and databases.--+++ Last updated 09/16/2021
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-dotnet.md
Last updated 04/01/2022 -+
Watch the video below to learn more about using the .NET SDK from a Cosmos DB en
> [!VIDEO https://aka.ms/docs.dotnet-best-practices] ## Checklist
-|Checked | Topic |Details/Links |
+|Checked | Subject |Details/Links |
|||| |<input type="checkbox"/> | SDK Version | Always use the [latest version](sql-api-sdk-dotnet-standard.md) of the Cosmos DB SDK available for optimal performance. | | <input type="checkbox"/> | Singleton Client | Use a [single instance](/dotnet/api/microsoft.azure.cosmos.cosmosclient?view=azure-dotnet&preserve-view=true) of `CosmosClient` for the lifetime of your application for [better performance](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage). |
-| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover will not succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution-sql-api.md) |
-| <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution-sql-api.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
-| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is very high. |
+| <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account whenever possible to reduce latency. Enable 2-4 regions and replicate your account in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for the duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK, see [this tutorial](tutorial-global-distribution-sql-api.md). |
+| <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution-sql-api.md?tabs=dotnetv3%2capi-async#preferred-locations) (see the configuration sketch after this checklist). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics, see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). |
+| <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up or out if usage is high. |
| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips.md#hosting) processing for the best performance, whenever possible. | | <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sql-sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3-sql.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).| |<input type="checkbox"/> | Networking | If you're using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher-end virtual machine where the max CPU usage is under 70%. | |<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and set [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property controls how long unused connections stay open before they're closed, which reduces the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommend values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. | |<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. Make the entire call stack asynchronous to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. | |<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Cosmos DB, [visit](troubleshoot-dot-net-sdk-request-timeout.md) |
-|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK will not retry on writes for transient failures as writes are not idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](troubleshoot-dot-net-sdk.md#retry-logics) |
+|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on, [visit](troubleshoot-dot-net-sdk.md#retry-logics) |
|<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. | |<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
-| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you are not aware of the number of partitions, start by using `int.MaxValue` which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
+| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within `QueryRequestOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. | | <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit](performance-tips-dotnet-sdk-v3-sql.md#indexing-policy) | | <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. | | <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. |
-| <input type="checkbox"/> | Enabling Query Metrics | For additional logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics [visit](profile-sql-api-query.md) |
-| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture additional diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the V2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in v3 SDK for more detailed cosmos diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if the `Diagnostics.ElapsedTime` is greater than a designated threshold value (i.e. if you have an SLA of 10 seconds, then capture diagnostics when `ElapsedTime` > 10 seconds ). It is advised to only use these diagnostics during performance testing. |
-| <input type="checkbox"/> | DefaultTraceListener | The DefaultTraceListener poses performance issues on production environments causing high CPU and I/O bottlenecks. Make sure you are using the latest SDK versions or [remove the DefaultTraceListener from your application](performance-tips-dotnet-sdk-v3-sql.md#logging-and-tracing) |
+| <input type="checkbox"/> | Enabling Query Metrics | For more logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics, [visit](profile-sql-api-query.md) |
+| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture extra diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the v2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in the v3 SDK for more detailed Cosmos DB diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if the `Diagnostics.ElapsedTime` is greater than a designated threshold value (that is, if you have an SLA of 10 seconds, capture diagnostics when `ElapsedTime` > 10 seconds). It's advised to only use these diagnostics during performance testing. |
+| <input type="checkbox"/> | DefaultTraceListener | The DefaultTraceListener poses performance issues in production environments, causing high CPU and I/O bottlenecks. Make sure you're using the latest SDK versions or [remove the DefaultTraceListener from your application](performance-tips-dotnet-sdk-v3-sql.md#logging-and-tracing). |
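To make a few of the checklist rows above concrete (Singleton Client, Connectivity Modes, Availability and Failovers, Parallel Queries), here's a minimal v3 SDK sketch. The endpoint, key, and region names are placeholder assumptions, not values from this article:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

// A minimal sketch, not a complete application. Endpoint, key, and regions
// below are placeholders.
CosmosClientOptions clientOptions = new CosmosClientOptions
{
    ConnectionMode = ConnectionMode.Direct, // Direct mode for best performance
    ApplicationPreferredRegions = new List<string> { "West US 2", "East US" }
};

// Create one CosmosClient for the lifetime of the application (singleton).
CosmosClient client = new CosmosClient(
    "https://<account-name>.documents.azure.com:443/",
    "<account-key>",
    clientOptions);

// Parallel query tuning: MaxConcurrency controls cross-partition fan-out;
// MaxBufferedItemCount caps how many results are pre-fetched.
QueryRequestOptions queryOptions = new QueryRequestOptions
{
    MaxConcurrency = -1,        // start high, then tune down to fit the environment
    MaxBufferedItemCount = 100  // roughly the expected number of results
};
```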
## Best practices when using Gateway mode Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value.
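For the v2 SDK, a hedged sketch of what raising the connection limit might look like; the endpoint and key are placeholders:

```csharp
using System;
using Microsoft.Azure.Documents.Client;

// A minimal sketch, assuming the v2 (Microsoft.Azure.DocumentDB) SDK.
ConnectionPolicy policy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Gateway,
    MaxConnectionLimit = 500 // default is 50 in .NET SDK 1.8.0 and later
};

DocumentClient client = new DocumentClient(
    new Uri("https://<account-name>.documents.azure.com:443/"), // placeholder
    "<account-key>",                                            // placeholder
    policy);
```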
For a sample application that's used to evaluate Azure Cosmos DB for high-perfor
To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md). Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Bicep Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bicep-samples.md
Title: Bicep samples for Azure Cosmos DB Core (SQL API) description: Use Bicep to create and configure Azure Cosmos DB. -+ Last updated 09/13/2021-++ # Bicep for Azure Cosmos DB
cosmos-db Bulk Executor Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-dot-net.md
ms.devlang: csharp Last updated 05/02/2020-+
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-java.md
ms.devlang: java Last updated 03/07/2022-+
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB | Microsoft Docs description: This article lists several Azure CLI code samples available for interacting with Azure Cosmos DB. View API-specific CLI samples.-+ Last updated 02/21/2022-++ keywords: cosmos db, azure cli samples, azure cli code samples, azure cli script samples
cosmos-db Create Notebook Visualize Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-notebook-visualize-data.md
Last updated 11/05/2019 -+ # Tutorial: Create a notebook in Azure Cosmos DB to analyze and visualize the data
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
Title: Quickstart - Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API description: This quickstart presents a code sample for the Azure Cosmos DB Spark 3 OLTP Connector for SQL API that you can use to connect to and query data in your Azure Cosmos DB account-+ ms.devlang: java Last updated 03/01/2022-++
cosmos-db Create Support Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-support-request-quota-increase.md
Title: How to request quota increase for Azure Cosmos DB resources
description: Learn how to request a quota increase for Azure Cosmos DB resources. You will also learn how to enable a subscription to access a region. -+ Last updated 04/27/2022
cosmos-db Create Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-website.md
Title: Deploy a web app with a template - Azure Cosmos DB description: Learn how to deploy an Azure Cosmos account, Azure App Service Web Apps, and a sample web application using an Azure Resource Manager template.-+ Last updated 06/19/2020-++ # Deploy Azure Cosmos DB and Azure App Service with a web app from GitHub using an Azure Resource Manager Template
cosmos-db Database Transactions Optimistic Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/database-transactions-optimistic-concurrency.md
Title: Database transactions and optimistic concurrency control in Azure Cosmos DB description: This article describes database transactions and optimistic concurrency control in Azure Cosmos DB--+++ Last updated 12/04/2019- # Transactions and optimistic concurrency control
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-container.md
Title: Create a container in Azure Cosmos DB SQL API description: Learn how to create a container in Azure Cosmos DB SQL API by using Azure portal, .NET, Java, Python, Node.js, and other SDKs. -+ Last updated 01/03/2022-++ ms.devlang: csharp
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-consistency.md
Title: Manage consistency in Azure Cosmos DB description: Learn how to configure and manage consistency levels in Azure Cosmos DB using Azure portal, .NET SDK, Java SDK and various other SDKs-+ Last updated 02/16/2022-++ ms.devlang: csharp, java, javascript
cosmos-db How To Multi Master https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-multi-master.md
Title: How to configure multi-region writes in Azure Cosmos DB description: Learn how to configure multi-region writes for your applications by using different SDKs in Azure Cosmos DB.-+ Last updated 01/06/2021-++
cosmos-db How To Provision Container Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-provision-container-throughput.md
Title: Provision container throughput in Azure Cosmos DB SQL API description: Learn how to provision throughput at the container level in Azure Cosmos DB SQL API using Azure portal, CLI, PowerShell and various other SDKs. -+ Last updated 10/14/2020-++
cosmos-db How To Provision Database Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-provision-database-throughput.md
Title: Provision database throughput in Azure Cosmos DB SQL API description: Learn how to provision throughput at the database level in Azure Cosmos DB SQL API using Azure portal, CLI, PowerShell and various other SDKs. -+ Last updated 10/15/2020-++
cosmos-db How To Query Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-query-container.md
Title: Query containers in Azure Cosmos DB description: Learn how to query containers in Azure Cosmos DB using in-partition and cross-partition queries-+ Last updated 3/18/2019-++ # Query an Azure Cosmos container
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB with Bicep description: Use Bicep to create and configure Azure Cosmos DB for Core (SQL) API -+ Last updated 02/18/2022-++ # Manage Azure Cosmos DB Core (SQL) API resources with Bicep
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-cli.md
Title: Manage Azure Cosmos DB Core (SQL) API resources using Azure CLI description: Manage Azure Cosmos DB Core (SQL) API resources using Azure CLI. -+ Last updated 02/18/2022-++ # Manage Azure Cosmos Core (SQL) API resources using Azure CLI
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-powershell.md
Title: Manage Azure Cosmos DB Core (SQL) API resources using PowerShell description: Manage Azure Cosmos DB Core (SQL) API resources using PowerShell. -+ Last updated 02/18/2022-++
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/manage-with-templates.md
Title: Create and manage Azure Cosmos DB with Resource Manager templates description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for Core (SQL) API -+ Last updated 02/18/2022-++ # Manage Azure Cosmos DB Core (SQL) API resources with Azure Resource Manager templates
cosmos-db Migrate Containers Partitioned To Nonpartitioned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-containers-partitioned-to-nonpartitioned.md
Title: Migrate non-partitioned Azure Cosmos containers to partitioned containers description: Learn how to migrate all the existing non-partitioned containers into partitioned containers.-+ Last updated 08/26/2021-++
cosmos-db Migrate Data Striim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-data-striim.md
Last updated 12/09/2021-+ # Migrate data to Azure Cosmos DB SQL API account using Striim
cosmos-db Migrate Hbase To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-hbase-to-cosmos-db.md
Last updated 12/07/2021 -+ # Migrate data from Apache HBase to Azure Cosmos DB SQL API account
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Title: Modeling data in Azure Cosmos DB description: Learn about data modeling in NoSQL databases, differences between modeling data in a relational database and a document database.--+++ Last updated 03/24/2022 - # Data modeling in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
For comparison, let's first see how we might model data in a relational database
:::image type="content" source="./media/sql-api-modeling-data/relational-data-model.png" alt-text="Relational database model" border="false":::
-When working with relational databases, the strategy is to normalize all your data. Normalizing your data typically involves taking an entity, such as a person, and breaking it down into discrete components. In the example above, a person may have multiple contact detail records, as well as multiple address records. Contact details can be further broken down by further extracting common fields like a type. The same applies to address, each record can be of type *Home* or *Business*.
+The strategy, when working with relational databases, is to normalize all your data. Normalizing your data typically involves taking an entity, such as a person, and breaking it down into discrete components. In the example above, a person may have multiple contact detail records and multiple address records. Contact details can be further broken down by extracting common fields, like a type. The same applies to addresses: each record can be of type *Home* or *Business*.
The guiding premise when normalizing data is to **avoid storing redundant data** on each record and rather refer to data. In this example, to read a person with all their contact details and addresses, you need to use JOINs to effectively compose back (or denormalize) your data at run time.
JOIN ContactDetailType cdt ON cdt.Id = cd.TypeId
JOIN Address a ON a.PersonId = p.Id ```
-Updating a single person with their contact details and addresses requires write operations across many individual tables.
+Write operations across many individual tables are required to update a single person's contact details and addresses.
Now let's take a look at how we would model the same data as a self-contained entity in Azure Cosmos DB.
Now let's take a look at how we would model the same data as a self-contained en
Using the approach above, we've **denormalized** the person record by **embedding** all the information related to this person, such as their contact details and addresses, into a *single JSON* document. In addition, because we're not confined to a fixed schema, we have the flexibility to do things like having contact details of entirely different shapes.
-Retrieving a complete person record from the database is now a **single read operation** against a single container and for a single item. Updating a person record, with their contact details and addresses, is also a **single write operation** against a single item.
+Retrieving a complete person record from the database is now a **single read operation** against a single container, for a single item. Updating the contact details and addresses of a person record is also a **single write operation** against a single item.
By denormalizing data, your application may need to issue fewer queries and updates to complete common operations.
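As a rough sketch with the .NET SDK v3, assuming `container` is an initialized `Container` and `Person` is a hypothetical POCO matching the embedded document above:

```csharp
using Microsoft.Azure.Cosmos;

// One point read returns the person together with all embedded
// contact details and addresses.
Person person = await container.ReadItemAsync<Person>(
    id: "1",
    partitionKey: new PartitionKey("1"));

// Change an embedded field and write the whole document back
// in a single operation against a single item.
person.Addresses[0].City = "Seattle";
await container.ReplaceItemAsync(person, person.Id, new PartitionKey(person.Id));
```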
In general, use embedded data models when:
* There are **contained** relationships between entities.
* There are **one-to-few** relationships between entities.
* There's embedded data that **changes infrequently**.
-* There's embedded data that will not grow **without bound**.
+* There's embedded data that won't grow **without bound**.
* There's embedded data that is **queried frequently together**.

> [!NOTE]
Take this JSON snippet.
This might be what a post entity with embedded comments would look like if we were modeling a typical blog, or CMS, system. The problem with this example is that the comments array is **unbounded**, meaning that there's no (practical) limit to the number of comments any single post can have. This may become a problem because the size of the item could grow infinitely large, so it's a design you should avoid.
-As the size of the item grows the ability to transmit the data over the wire as well as reading and updating the item, at scale, will be impacted.
+As the size of the item grows, the ability to transmit the data over the wire, and to read and update the item at scale, will be impacted.
In this case, it would be better to consider the following data model.
Comment items:
] ```
-This model has a document for each comment with a property that contains the post id. This allows posts to contain any number of comments and can grow efficiently. Users wanting to see more
-than the most recent comments would query this container passing the postId which should be the partition key for the comments container.
+This model has a document for each comment with a property that contains the post identifier. This allows posts to contain any number of comments and can grow efficiently. Users wanting to see more
+than the most recent comments would query this container passing the postId, which should be the partition key for the comments container.
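A sketch of that query with the .NET SDK v3, assuming a hypothetical `Comment` class and a `commentsContainer` partitioned on `/postId`:

```csharp
using System;
using Microsoft.Azure.Cosmos;

var query = new QueryDefinition(
        "SELECT * FROM c WHERE c.postId = @postId ORDER BY c._ts DESC")
    .WithParameter("@postId", "1");

// Supplying the partition key keeps this an efficient single-partition query.
FeedIterator<Comment> iterator = commentsContainer.GetItemQueryIterator<Comment>(
    query,
    requestOptions: new QueryRequestOptions { PartitionKey = new PartitionKey("1") });

while (iterator.HasMoreResults)
{
    foreach (Comment comment in await iterator.ReadNextAsync())
    {
        Console.WriteLine(comment.Text);
    }
}
```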
Another case where embedding data isn't a good idea is when the embedded data is used often across items and will change frequently.
Take this JSON snippet.
"holdings": [ { "numberHeld": 100,
- "stock": { "symbol": "zaza", "open": 1, "high": 2, "low": 0.5 }
+ "stock": { "symbol": "zbzb", "open": 1, "high": 2, "low": 0.5 }
}, { "numberHeld": 50,
Take this JSON snippet.
This could represent a person's stock portfolio. We have chosen to embed the stock information into each portfolio document. In an environment where related data is changing frequently, like a stock trading application, embedding data that changes frequently is going to mean that you're constantly updating each portfolio document every time a stock is traded.
-Stock *zaza* may be traded many hundreds of times in a single day and thousands of users could have *zaza* on their portfolio. With a data model like the above we would have to update many thousands of portfolio documents many times every day leading to a system that won't scale well.
+Stock *zbzb* may be traded many hundreds of times in a single day and thousands of users could have *zbzb* on their portfolio. With a data model like the above we would have to update many thousands of portfolio documents many times every day leading to a system that won't scale well.
## <a id="referencing-data"></a>Reference data
Person document:
Stock documents: { "id": "1",
- "symbol": "zaza",
+ "symbol": "zbzb",
"open": 1, "high": 2, "low": 0.5,
An immediate downside to this approach though is if your application is required
### What about foreign keys?
-Because there's currently no concept of a constraint, foreign-key or otherwise, any inter-document relationships that you have in documents are effectively "weak links" and won't be verified by the database itself. If you want to ensure that the data a document is referring to actually exists, then you need to do this in your application, or through the use of server-side triggers or stored procedures on Azure Cosmos DB.
+Because there's currently no concept of a constraint, foreign-key or otherwise, any inter-document relationships that you have in documents are effectively "weak links" and won't be verified by the database itself. If you want to ensure that the data a document is referring to actually exists, then you need to do this in your application, or by using server-side triggers or stored procedures on Azure Cosmos DB.
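A minimal sketch of such an application-side check with the .NET SDK v3; the `books` and `publishers` containers and the `Book` and `Publisher` types are hypothetical:

```csharp
using System;
using System.Net;
using Microsoft.Azure.Cosmos;

// Verify the referenced publisher exists before creating the book.
// The database itself won't enforce this "weak link".
try
{
    await publishers.ReadItemAsync<Publisher>(book.PubId, new PartitionKey(book.PubId));
}
catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.NotFound)
{
    throw new InvalidOperationException($"Publisher '{book.PubId}' doesn't exist.");
}

await books.CreateItemAsync(book, new PartitionKey(book.Id));
```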
### When to reference
Book documents:
{"id": "1000","name": "Deep Dive into Azure Cosmos DB", "pub-id": "mspress"} ```
-In the above example, we have dropped the unbounded collection on the publisher document. Instead we just have a reference to the publisher on each book document.
+In the above example, we've dropped the unbounded collection on the publisher document. Instead we just have a reference to the publisher on each book document.
-### How do I model many to many relationships?
+### How do I model many-to-many relationships?
-In a relational database *many:many* relationships are often modeled with join tables, which just join records from other tables together.
+In a relational database *many-to-many* relationships are often modeled with join tables, which just join records from other tables together.
:::image type="content" source="./media/sql-api-modeling-data/join-table.png" alt-text="Join tables" border="false":::
Here we've (mostly) followed the embedded model, where data from other entities
If you look at the book document, we can see a few interesting fields when we look at the array of authors. There's an `id` field, which we use to refer back to an author document (standard practice in a normalized model), but then we also have `name` and `thumbnailUrl`. We could have stuck with `id` and left the application to get any additional information it needed from the respective author document using the "link", but because our application displays the author's name and a thumbnail picture with every book displayed, we can save a round trip to the server per book in a list by denormalizing **some** data from the author.
-Sure, if the author's name changed or they wanted to update their photo we'd have to go and update every book they ever published but for our application, based on the assumption that authors don't change their names often, this is an acceptable design decision.
+Sure, if the author's name changed or they wanted to update their photo we'd have to update every book they ever published but for our application, based on the assumption that authors don't change their names often, this is an acceptable design decision.
In the example, there are **pre-calculated aggregate** values to save expensive processing on a read operation. Some of the data embedded in the author document is calculated at run time. Every time a new book is published, a book document is created **and** the countOfBooks field is set to a calculated value based on the number of book documents that exist for a particular author. This optimization would be good in read-heavy systems where we can afford to do computations on writes in order to optimize reads.
-The ability to have a model with pre-calculated fields is made possible because Azure Cosmos DB supports **multi-document transactions**. Many NoSQL stores can't do transactions across documents and therefore advocate design decisions, such as "always embed everything", due to this limitation. With Azure Cosmos DB, you can use server-side triggers, or stored procedures, that insert books and update authors all within an ACID transaction. Now you don't **have** to embed everything into one document just to be sure that your data remains consistent.
+The ability to have a model with pre-calculated fields is made possible because Azure Cosmos DB supports **multi-document transactions**. Many NoSQL stores can't do transactions across documents and therefore advocate design decisions, such as "always embed everything", due to this limitation. With Azure Cosmos DB, you can use server-side triggers, or stored procedures that insert books and update authors all within an ACID transaction. Now you don't **have** to embed everything into one document just to be sure that your data remains consistent.
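For instance, a hypothetical stored procedure named `insertBookUpdateAuthor`, registered on the authors container, could be invoked like this with the .NET SDK v3; both writes commit or roll back together because they run inside one transaction scoped to a single partition key:

```csharp
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Scripts;

// 'authorsContainer', 'newBook', and the stored procedure id are assumptions
// for illustration; the procedure body would insert the book document and
// increment the author's countOfBooks within the same transaction.
StoredProcedureExecuteResponse<string> response =
    await authorsContainer.Scripts.ExecuteStoredProcedureAsync<string>(
        storedProcedureId: "insertBookUpdateAuthor",
        partitionKey: new PartitionKey("author-1"),
        parameters: new dynamic[] { newBook });
```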
## Distinguish between different document types
Review documents:
This integration happens through [Azure Cosmos DB analytical store](../analytical-store-introduction.md), a columnar representation of your transactional data that enables large-scale analytics without any impact to your transactional workloads. This analytical store is suitable for fast, cost-effective queries on large operational data sets, without copying data and impacting the performance of your transactional workloads. When you create a container with analytical store enabled, or when you enable analytical store on an existing container, all transactional inserts, updates, and deletes are synchronized with analytical store in near real time; no Change Feed or ETL jobs are required.
-With Synapse Link, you can now directly connect to your Azure Cosmos DB containers from Azure Synapse Analytics and access the analytical store, at no Request Units (RUs) costs. Azure Synapse Analytics currently supports Synapse Link with Synapse Apache Spark and serverless SQL pools. If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions for that account.
+With Azure Synapse Link, you can now directly connect to your Azure Cosmos DB containers from Azure Synapse Analytics and access the analytical store at no request unit cost. Azure Synapse Analytics currently supports Azure Synapse Link with Synapse Apache Spark and serverless SQL pools. If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions for that account.
### Analytical store automatic schema inference
Normalization becomes meaningless since with Azure Synapse Link you can join bet
* Fewer properties per document.
* Data structures with fewer nested levels.
-Please note that these last two factors, fewer properties and fewer levels, help in the performance of your analytical queries but also decrease the chances of parts of your data not being represented in the analytical store. As described in the article on automatic schema inference rules, there are limits to the number of levels and properties that are represented in analytical store.
+Note that these last two factors, fewer properties and fewer levels, help in the performance of your analytical queries but also decrease the chances of parts of your data not being represented in the analytical store. As described in the article on automatic schema inference rules, there are limits to the number of levels and properties that are represented in analytical store.
Another important factor for normalization is that SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. In other words, both analytical store and Synapse SQL serverless pools have a limit of 1000 properties.
But what to do since denormalization is an important data modeling technique for
Your Azure Cosmos DB partition key (PK) isn't used in analytical store. And now you can use [analytical store custom partitioning](https://devblogs.microsoft.com/cosmosdb/custom-partitioning-azure-synapse-link/) to create copies of analytical store using any PK that you want. Because of this isolation, you can choose a PK for your transactional data with a focus on data ingestion and point reads, while cross-partition queries can be done with Azure Synapse Link. Let's see an example:
-In a hypothetical global IoT scenario, `device id` is a good PK since all devices have a similar data volume and with that you won't have a hot partition problem. But if you want to analyze the data of more than one device, like "all data from yesterday" or "totals per city", you may have problems since those are cross-partition queries. Those queries can hurt your transactional performance since they use part of your throughput in RUs to run. But with Azure Synapse Link, you can run these analytical queries at no RUs costs. Analytical store columnar format is optimized for analytical queries and Azure Synapse Link leverages this characteristic to allow great performance with Azure Synapse Analytics runtimes.
+In a hypothetical global IoT scenario, `device id` is a good PK since all devices have a similar data volume, and with that you won't have a hot partition problem. But if you want to analyze the data of more than one device, like "all data from yesterday" or "totals per city", you may have problems since those are cross-partition queries. Those queries can hurt your transactional performance since they use part of your throughput in request units to run. But with Azure Synapse Link, you can run these analytical queries at no request unit cost. The analytical store columnar format is optimized for analytical queries, and Azure Synapse Link takes advantage of this characteristic to allow great performance with Azure Synapse Analytics runtimes.
### Data types and properties names
Azure Synapse Link allows you to reduce costs from the following perspectives:
* Fewer queries running in your transactional database.
* A PK optimized for data ingestion and point reads, reducing data footprint, hot partition scenarios, and partition splits.
* Data tiering, since [analytical time-to-live (attl)](../analytical-store-introduction.md#analytical-ttl) is independent from transactional time-to-live (tttl). You can keep your transactional data in transactional store for a few days, weeks, or months, and keep the data in analytical store for years or forever; see the sketch after this list. The analytical store columnar format brings natural data compression, from 50% up to 90%, and its cost per GB is ~10% of the transactional store's actual price. For more information about the current backup limitations, see [analytical store overview](../analytical-store-introduction.md).
- * No ETL jobs running in your environment, meaning that you don't need to provision RUs for them.
+ * No ETL jobs running in your environment, meaning that you don't need to provision request units for them.
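As a sketch of that data tiering setup with the .NET SDK v3 (the container name and partition key path are assumptions), transactional TTL can be kept short while analytical store retains data indefinitely:

```csharp
using Microsoft.Azure.Cosmos;

var properties = new ContainerProperties(
    id: "CustomersOrdersAndItems",
    partitionKeyPath: "/customerId")
{
    DefaultTimeToLive = 60 * 60 * 24 * 180,     // tttl: keep transactional data ~6 months
    AnalyticalStoreTimeToLiveInSeconds = -1     // attl: keep analytical data forever
};

await database.CreateContainerIfNotExistsAsync(properties);
```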
### Controlled redundancy
-This is a great alternative for situations when a data model already exists and can't be changed. And the existing data model doesn't fit well into analytical store due to automatic schema inference rules like the limit of nested levels or the maximum number of properties. If this is your case, you can leverage [Azure Cosmos DB Change Feed](../change-feed.md) to replicate your data into another container, applying the required transformations for a Synapse Link friendly data model. Let's see an example:
+This is a great alternative for situations when a data model already exists and can't be changed, and the existing data model doesn't fit well into analytical store due to automatic schema inference rules, like the limit of nested levels or the maximum number of properties. If this is your case, you can use [Azure Cosmos DB Change Feed](../change-feed.md) to replicate your data into another container, applying the required transformations for an Azure Synapse Link friendly data model. Let's see an example:
#### Scenario

Container `CustomersOrdersAndItems` is used to store on-line orders including customer and items details: billing address, delivery address, delivery method, delivery status, items price, etc. Only the first 1000 properties are represented, and key information isn't included in analytical store, blocking Azure Synapse Link usage. The container has PBs of records, and it's not possible to change the application and remodel the data.
-Another perspective of the problem is the big data volume. Billions of rows are constantly used by the Analytics Department, what prevents them to use tttl for old data deletion. Maintaining the entire data history in the transactional database because of analytical needs forces them to constantly increase RUs provisioning, impacting costs. Transactional and analytical workloads compete for the same resources at the same time.
+Another perspective of the problem is the big data volume. Billions of rows are constantly used by the Analytics Department, which prevents them from using tttl to delete old data. Maintaining the entire data history in the transactional database because of analytical needs forces them to constantly increase request unit provisioning, impacting costs. Transactional and analytical workloads compete for the same resources at the same time.
What to do?

#### Solution with Change Feed
-* The engineering team decided to use Change Feed to populate three new containers: `Customers`, `Orders`, and `Items`. With Change Feed they are normalizing and flattening the data. Unnecessary information is removed from the data model and each container has close to 100 properties, avoiding data loss due to automatic schema inference limits.
-* These new containers have analytical store enabled and now the Analytics Department is using Synapse Analytics to read the data, reducing the RUs usage since the analytical queries are happening in Synapse Apache Spark and serverless SQL pools.
-* Container `CustomersOrdersAndItems` now has tttl set to keep data for six months only, which allows for another RUs usage reduction, since there's a minimum of 10 RUs per GB in Azure Cosmos DB. Less data, fewer RUs.
+* The engineering team decided to use Change Feed to populate three new containers: `Customers`, `Orders`, and `Items`, as shown in the sketch after this list. With Change Feed they're normalizing and flattening the data. Unnecessary information is removed from the data model, and each container has close to 100 properties, avoiding data loss due to automatic schema inference limits.
+* These new containers have analytical store enabled, and now the Analytics Department is using Synapse Analytics to read the data, reducing the request unit usage since the analytical queries are happening in Synapse Apache Spark and serverless SQL pools.
+* Container `CustomersOrdersAndItems` now has tttl set to keep data for six months only, which allows for a further reduction in request unit usage, since there's a minimum of 10 request units per GB in Azure Cosmos DB. Less data, fewer request units.
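A minimal sketch of that pipeline with the .NET SDK v3 change feed processor; the container references, the `Order` type, and the projection logic are assumptions for illustration:

```csharp
using System.Collections.Generic;
using System.Threading;
using Microsoft.Azure.Cosmos;

ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<Order>(
        processorName: "normalizeOrders",
        onChangesDelegate: async (IReadOnlyCollection<Order> changes, CancellationToken token) =>
        {
            foreach (Order order in changes)
            {
                // Flatten and split each order into the three new containers.
                await customers.UpsertItemAsync(order.Customer,
                    new PartitionKey(order.Customer.Id), cancellationToken: token);
                // ...project the Orders and Items documents the same way.
            }
        })
    .WithInstanceName("change-feed-worker-1")
    .WithLeaseContainer(leaseContainer)
    .Build();

await processor.StartAsync();
```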
## Takeaways

The biggest takeaway from this article is that data modeling in a schema-free world is as important as ever.
-Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can leverage the flexibility of a schema-free database to embrace that change and evolve your data model easily.
+Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can use the flexibility of a schema-free database to embrace that change and evolve your data model easily.
## Next steps
cosmos-db Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/odbc-driver.md
Title: Connect to Azure Cosmos DB using BI analytics tools description: Learn how to use the Azure Cosmos DB ODBC driver to create tables and views so that normalized data can be viewed in BI and data analytics software.--+++
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Core (SQL) API description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for Core (SQL) API-+ Last updated 01/20/2021-++ # Azure PowerShell samples for Azure Cosmos DB Core (SQL) API
cosmos-db Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-template.md
Title: Quickstart - Create an Azure Cosmos DB and a container by using Azure Resource Manager template description: Quickstart showing how to an Azure Cosmos database and a container by using Azure Resource Manager template--+++ tags: azure-resource-manager
cosmos-db Scale On Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/scale-on-schedule.md
Title: Scale Azure Cosmos DB on a schedule by using Azure Functions timer description: Learn how to scale changes in throughput in Azure Cosmos DB using PowerShell and Azure Functions.-+ Last updated 01/13/2020-++ # Scale Azure Cosmos DB throughput by using Azure Functions Timer trigger
cosmos-db Serverless Computing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/serverless-computing-database.md
-+ Last updated 05/02/2020
cosmos-db Sql Query Scalar Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-scalar-expressions.md
Title: Scalar expressions in Azure Cosmos DB SQL queries description: Learn about the scalar expression SQL syntax for Azure Cosmos DB. This article also describes how to combine scalar expressions into complex expressions by using operators. -+ Last updated 05/17/2019-++ # Scalar expressions in Azure Cosmos DB SQL queries
cosmos-db Synthetic Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/synthetic-partition-keys.md
Last updated 08/26/2021--+++
cosmos-db Templates Samples Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/templates-samples-sql.md
Title: Azure Resource Manager templates for Azure Cosmos DB Core (SQL API) description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB. -+ Last updated 08/26/2021-++ # Azure Resource Manager templates for Azure Cosmos DB
cosmos-db Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/time-to-live.md
Title: Expire data in Azure Cosmos DB with Time to Live description: With TTL, Microsoft Azure Cosmos DB provides the ability to have documents automatically purged from the system after a period of time.--+++ Last updated 09/16/2021-- # Time to Live (TTL) in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
cosmos-db Troubleshoot Bad Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-bad-request.md
Last updated 03/07/2022 -+ # Diagnose and troubleshoot bad request exceptions in Azure Cosmos DB
cosmos-db Troubleshoot Changefeed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-changefeed-functions.md
Last updated 04/14/2022 -+ # Diagnose and troubleshoot issues when using Azure Functions trigger for Cosmos DB
cosmos-db Troubleshoot Dot Net Sdk Request Header Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-header-too-large.md
Last updated 09/29/2021 -+
cosmos-db Troubleshoot Dot Net Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-request-timeout.md
Last updated 02/02/2022 -+
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
Last updated 03/09/2022 -+ # Diagnose and troubleshoot slow requests in Azure Cosmos DB .NET SDK
cosmos-db Troubleshoot Forbidden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-forbidden.md
Last updated 04/14/2022 -+ # Diagnose and troubleshoot Azure Cosmos DB forbidden exceptions
cosmos-db Troubleshoot Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-not-found.md
Last updated 05/26/2021 -+
cosmos-db Troubleshoot Request Rate Too Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-rate-too-large.md
Last updated 03/03/2022 -+ # Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions
cosmos-db Troubleshoot Request Timeout Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-timeout-java-sdk-v4-sql.md
Last updated 10/28/2020 -+ # Diagnose and troubleshoot Azure Cosmos DB Java v4 SDK request timeout exceptions
cosmos-db Troubleshoot Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-request-timeout.md
Last updated 07/13/2020 -+ # Diagnose and troubleshoot Azure Cosmos DB request timeout exceptions
cosmos-db Troubleshoot Sdk Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-sdk-availability.md
Last updated 03/28/2022
-+ # Diagnose and troubleshoot the availability of Azure Cosmos SDKs in multiregional environments [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
cosmos-db Troubleshoot Service Unavailable Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-service-unavailable-java-sdk-v4-sql.md
Last updated 02/03/2022 -+ # Diagnose and troubleshoot Azure Cosmos DB Java v4 SDK service unavailable exceptions
cosmos-db Troubleshoot Service Unavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-service-unavailable.md
Last updated 08/06/2020 -+ # Diagnose and troubleshoot Azure Cosmos DB service unavailable exceptions
cosmos-db Troubleshoot Unauthorized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-unauthorized.md
Last updated 07/13/2020 -+ # Diagnose and troubleshoot Azure Cosmos DB unauthorized exceptions
cosmos-db Tutorial Global Distribution Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-global-distribution-sql-api.md
Title: 'Tutorial: Azure Cosmos DB global distribution tutorial for the SQL API' description: 'Tutorial: Learn how to set up Azure Cosmos DB global distribution using the SQL API with .NET, Java, Python and various other SDKs'--+++ Last updated 04/03/2022- - # Tutorial: Set up Azure Cosmos DB global distribution using the SQL API [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
cosmos-db Tutorial Query Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-query-sql-api.md
Title: 'Tutorial: How to query with SQL in Azure Cosmos DB?' description: 'Tutorial: Learn how to query with SQL queries in Azure Cosmos DB using the query playground'--+++ Last updated 08/26/2021- # Tutorial: Query Azure Cosmos DB by using the SQL API
cosmos-db Tutorial Sql Api Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-sql-api-dotnet-bulk-import.md
Last updated 03/25/2022-+ ms.devlang: csharp
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Last updated 07/12/2021-+
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB Table API description: Azure CLI Samples for Azure Cosmos DB Table API-+ Last updated 02/21/2022-++
cosmos-db Create Table Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-java.md
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
### [Azure CLI](#tab/azure-cli)
-Cosmos DB accounts are created using the [az Cosmos DB create](/cli/azure/cosmosdb#az_cosmosdb_create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB. As all Azure resource must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
+Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB account. As all Azure resources must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
In the [Azure portal](https://portal.azure.com/), complete the following steps t
### [Azure CLI](#tab/azure-cli)
-Tables in Cosmos DB are created using the [az Cosmos DB table create](/cli/azure/cosmosdb/table#az_cosmosdb_table_create) command.
+Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
```azurecli COSMOS_TABLE_NAME='WeatherData'
To access your table(s) in Cosmos DB, your app will need the table connection st
### [Azure CLI](#tab/azure-cli)
-To get the primary table storage connection string using Azure CLI, use the [az Cosmos DB keys list](/cli/azure/cosmosdb/keys#az_cosmosdb_keys_list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
+To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
```azurecli # This gets the primary Table connection string
A resource group can be deleted using the [Azure portal](https://portal.azure.co
### [Azure CLI](#tab/azure-cli)
-To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az_group_delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
+To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
```azurecli az group delete --name $RESOURCE_GROUP_NAME
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-container.md
Title: Create a container in Azure Cosmos DB Table API description: Learn how to create a container in Azure Cosmos DB Table API by using Azure portal, .NET, Java, Python, Node.js, and other SDKs. -+ Last updated 10/16/2020-++ # Create a container in Azure Cosmos DB Table API
cosmos-db How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-python.md
ms.devlang: python
Last updated 03/23/2021 -+
cosmos-db How To Use Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-ruby.md
Last updated 07/23/2020 -+ # How to use Azure Table Storage and the Azure Cosmos DB Table API with Ruby [!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB Table API with Bicep description: Use Bicep to create and configure Azure Cosmos DB Table API. -+ Last updated 09/13/2021-++ # Manage Azure Cosmos DB Table API resources using Bicep
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB Table API description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB Table API-+ Last updated 01/20/2021-++ # Azure PowerShell samples for Azure Cosmos DB Table API
cosmos-db Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/resource-manager-templates.md
Title: Resource Manager templates for Azure Cosmos DB Table API description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB Table API. -+ Last updated 05/19/2020-++ # Manage Azure Cosmos DB Table API resources using Azure Resource Manager templates
cosmos-db Table Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/table-import.md
Title: Migrate existing data to a Table API account in Azure Cosmos DB description: Learn how to migrate or import on-premises or cloud data to an Azure Table API account in Azure Cosmos DB.--+++
cosmos-db Table Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/table-support.md
Last updated 11/03/2021 -+ ms.devlang: cpp, csharp, java, javascript, php, python, ruby
cosmos-db Tutorial Global Distribution Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-global-distribution-table.md
Last updated 01/30/2020-+ # Set up Azure Cosmos DB global distribution using the Table API [!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
cosmos-db Tutorial Query Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-query-table.md
Last updated 06/05/2020-+ ms.devlang: csharp
cosmos-db Total Cost Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/total-cost-ownership.md
Title: Total Cost of Ownership (TCO) with Azure Cosmos DB description: This article compares the total cost of ownership of Azure Cosmos DB with IaaS and on-premises databases--+++ Last updated 08/26/2021- # Total Cost of Ownership (TCO) with Azure Cosmos DB
cosmos-db Tutorial Setup Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/tutorial-setup-ci-cd.md
Last updated 01/28/2020 -+ # Set up a CI/CD pipeline with the Azure Cosmos DB Emulator build task in Azure DevOps
cosmos-db Understand Your Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/understand-your-bill.md
Title: Understanding your Azure Cosmos DB bill description: This article explains how to understand your Azure Cosmos DB bill with some examples.--+++ Last updated 03/31/2022- # Understand your Azure Cosmos DB bill
cosmos-db Unique Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/unique-keys.md
Title: Use unique keys in Azure Cosmos DB description: Learn how to define and use unique keys for an Azure Cosmos database. This article also describes how unique keys add a layer of data integrity.--+++ Last updated 08/26/2021- # Unique key constraints in Azure Cosmos DB
cosmos-db Update Backup Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/update-backup-storage-redundancy.md
Last updated 12/03/2021 -+
cosmos-db Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-cases.md
Title: Common use cases and scenarios for Azure Cosmos DB description: 'Learn about the top five use cases for Azure Cosmos DB: user generated content, event logging, catalog data, user preferences data, and Internet of Things (IoT).' --+++ Last updated 05/21/2019
cosmos-db Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-metrics.md
Title: Monitor and debug with insights in Azure Cosmos DB
description: Use metrics in Azure Cosmos DB to debug common issues and monitor the database. -+
cosmos-db Visualize Qlik Sense https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/visualize-qlik-sense.md
Last updated 05/23/2019-+ # Connect Qlik Sense to Azure Cosmos DB and visualize your data
cosmos-db Whitepapers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/whitepapers.md
Title: Whitepapers that describe Azure Cosmos DB concepts
description: Get the list of whitepapers for Azure Cosmos DB, these whitepapers describe the concepts in depth. --+++ Last updated 05/07/2021
cost-management-billing Understand Azure Data Explorer Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-azure-data-explorer-reservation-charges.md
Title: Understand how the reservation discount is applied to Azure Data Explorer
+ Title: Reservation discount for Azure Data Explorer
description: Learn how the reservation discount is applied to Azure Data Explorer markup meter. Previously updated : 09/15/2021 Last updated : 05/31/2022+
-# Understand how the reservation discount is applied to Azure Data Explorer
+# How the reservation discount is applied to Azure Data Explorer
After you buy an Azure Data Explorer reserved capacity, the reservation discount is automatically applied to Azure Data Explorer resources that match the attributes and quantity of the reservation. A reservation includes the Azure Data Explorer markup charges. It doesn't include compute, networking, storage, or any other Azure resource used to operate the Azure Data Explorer cluster. Reservations for these resources should be bought separately.
-## How reservation discount is applied
+## Reservation discount usage
-A reservation discount is on a "*use-it-or-lose-it*" basis. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward unused reserved hours.
+A reservation discount is on a "*use-it-or-lose-it*" basis. So, if you don't have matching resources for any hour, then you lose a reservation quantity for that hour. You can't carry forward discounts for unused reserved hours.
When you shut down a resource, the reservation discount automatically applies to another matching resource in the specified scope. If no matching resources are found in the specified scope, then the reserved hours are *lost*.
-## Reservation discount applied to Azure Data Explorer clusters
+## Discount for other resources
A reservation discount is applied to Azure Data Explorer markup consumption on an hour-by-hour basis. For Azure Data Explorer resources that don't run the full hour, the reservation discount is automatically applied to other Data Explorer resources that match the reservation attributes. The discount can apply to Azure Data Explorer resources that are running concurrently. If you don't have Azure Data Explorer resources that run for the full hour and that match the reservation attributes, you don't get the full benefit of the reservation discount for that hour.
If you have questions or need help, [create a support request](https://go.micros
To learn more about Azure reservations, see the following articles:

* [Prepay for Azure Data Explorer compute resources with Azure Data Explorer reserved capacity](/azure/data-explorer/pricing-reserved-capacity)
-* [What are reservations for Azure](save-compute-costs-reservations.md)
+* [What are reservations for Azure?](save-compute-costs-reservations.md)
* [Manage Azure reservations](manage-reserved-vm-instance.md)
-* [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
+* [Understand reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md)
* [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md) * [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations)
databox-online Azure Stack Edge Gpu 2205 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2205-release-notes.md
+
+ Title: Azure Stack Edge 2205 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2205 release.
++
+
+++ Last updated : 06/06/2022+++
+# Azure Stack Edge 2205 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2205 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2205** release, which maps to software version number **2.2.1981.5086**. This software can be applied to your device if you're running at least Azure Stack Edge 2106 (2.2.1636.3457) software.
+
+## What's new
+
+The 2205 release has the following features and enhancements:
+
+- **Kubernetes changes** - Beginning with this release, compute enablement is moved to a dedicated Kubernetes page in the local UI.
+- **Generation 2 virtual machines** - Starting with this release, Generation 2 virtual machines can be deployed on Azure Stack Edge. For more information, see [Supported VM sizes and types](azure-stack-edge-gpu-virtual-machine-overview.md#operating-system-disks-and-images).
+- **GPU extension update** - In this release, the GPU extension packages are updated. These updates will fix some issues that were encountered in a previous release during the installation of the extension. For more information, see how to [Update GPU extension of your Azure Stack Edge](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md).
+- **No IP option** - Going forward, there's an option to not set an IP for a network interface on your Azure Stack Edge device. For more information, see [Configure network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-network).
++
+## Issues fixed in 2205 release
+
+The following table lists the issues that were release noted in previous releases and fixed in the current release.
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|GPU extension installation | In previous releases, there were issues that caused the GPU extension installation to fail. These issues are described in [Troubleshooting GPU extension issues](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md). They're fixed in the 2205 release, and both the Windows and Linux installation packages are updated. More information on 2205-specific installation changes is covered in [Install GPU extension on your Azure Stack Edge device](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md). |
+
+## Known issues in 2205 release
+
+The following table provides a summary of known issues in this release.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Preview features |For this release, the following features are available in preview: <br> - Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. <br> - VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only. <br> - Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R. |These features will be generally available in later releases. |
+|**2.**|HPN VMs |For this release, the Standard_F12_HPN can only support one network interface and can't be used for Multi-Access Edge Computing (MEC) deployments. | |
++
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> - In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> - Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> - Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> - Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: can't create directory 'test': Permission denied|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
+|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
+|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later.|For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).|
+|**9.**|Kubernetes |Port 31000 is reserved for the Kubernetes Dashboard. Port 31001 is reserved for the Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10 are reserved for the Kubernetes service and the Core DNS service respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page. |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**18.**|Kubernetes |Kubernetes doesn't support ":" in the environment variable names that are used by .NET applications. These names are also required for the Event Grid IoT Edge module to function on an Azure Stack Edge device, among other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see this [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201).|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are committed to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
+|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the one remaining GPU. |
+|**23.**|Custom script VM extension |There's a known issue with Windows VMs that were created in an earlier release when the device is updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update, causing the extension deployment to time out. | To work around this issue (a consolidated PowerShell sketch of these steps follows this table): <br> - Connect to the Windows VM using remote desktop protocol (RDP). <br> - Make sure that `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> - If `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> - While `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> - After you kill the process, the process starts running again with the newer version. <br> - Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> - [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
+|**24.**|GPU VMs |Prior to this release, the GPU VM lifecycle wasn't managed in the update flow. Hence, when updating to the 2103 release, GPU VMs aren't stopped automatically during the update. You'll need to manually stop the GPU VMs using a `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully, and the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via `stop-stayProvisioned` before the update are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VM after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were used originally by VMs. <br>The longer the GPU VMs are in the stopped state, the higher the chances that Kubernetes will take over the GPUs. |
+|**25.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**26.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | This functionality may be available in a future release. |
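+
+For issue **23** above, the workaround steps can be consolidated into a short PowerShell session run inside the affected Windows VM over RDP. This is a minimal sketch based only on the steps listed in the table; adjust the wait time and checks for your environment.
+
+```powershell
+# Issue 23 workaround, run inside the affected Windows VM (RDP session).
+# 1. Check whether the Windows VM Guest Agent is running.
+Get-Process WaAppAgent -ErrorAction SilentlyContinue
+
+# 2. If it isn't running, restart the RdAgent service and wait about 5 minutes.
+Get-Service RdAgent | Restart-Service
+Start-Sleep -Seconds 300
+
+# 3. While waappagent.exe is running, stop the stuck guest agent process;
+#    it restarts on its own with the newer version.
+Stop-Process -Name WindowsAzureGuest -Force
+
+# 4. Verify that the guest agent version is now 2.7.41491.971.
+Get-Process WindowsAzureGuestAgent | Format-List ProductVersion
+```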
++
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Create Virtual Machine Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-image.md
Previously updated : 07/16/2021 Last updated : 05/19/2022 #Customer intent: As an IT admin, I need to understand how to create Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
Do the following steps to create a Windows VM image:
1. Create a Windows virtual machine in Azure. For portal instructions, see [Create a Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md). For PowerShell instructions, see [Tutorial: Create and manage Windows VMs with Azure PowerShell](../virtual-machines/windows/tutorial-manage-vm.md).
- The virtual machine must be a Generation 1 VM. The OS disk that you use to create your VM image must be a fixed-size VHD of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+ The virtual machine can be a Generation 1 or Generation 2 VM. The OS disk that you use to create your VM image must be a fixed-size VHD of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
- You can use any Windows Gen1 VM with a fixed-size VHD in Azure Marketplace. For a list Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images).
+    You can use any Windows Gen1 or Gen2 VM with a fixed-size VHD in Azure Marketplace. For a list of Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images).
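+
+    Because the OS disk must be a fixed-size VHD, you may first need to convert a dynamically expanding disk. The following is a minimal sketch using the Hyper-V PowerShell module; the file paths are placeholders, not values from this article.
+
+    ```powershell
+    # Convert a dynamically expanding VHD/VHDX into the fixed-size VHD
+    # format that the VM image requires. Paths are placeholders.
+    Convert-VHD -Path "C:\vhds\source-os-disk.vhdx" `
+        -DestinationPath "C:\vhds\fixed-os-disk.vhd" -VHDType Fixed
+    ```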
2. Generalize the virtual machine. To generalize the VM, [connect to the virtual machine](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-windows-vm), open a command prompt, and run the following `sysprep` command:
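   A minimal sketch of the usual generalize-and-shutdown invocation, assuming the default sysprep location (verify the exact command in the full article):

   ```powershell
   # Generalize the VM and shut it down; /oobe makes VMs created from this
   # image boot into the out-of-box experience.
   & "$env:SystemRoot\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown
   ```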
Do the following steps to create a Linux VM image:
1. Create a Linux virtual machine in Azure. For portal instructions, see [Quickstart: Create a Linux VM in the Azure portal](../virtual-machines/linux/quick-create-portal.md). For PowerShell instructions, see [Quickstart: Create a Linux VM in Azure with PowerShell](../virtual-machines/linux/quick-create-powershell.md).
- You can use any Gen1 VM with a fixed-size VHD in Azure Marketplace to create Linux custom images, with the exception of Red Hat Enterprise Linux (RHEL) images, which require extra steps. For a list of Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images). For guidance on RHEL images, see [Using RHEL BYOS images](#using-rhel-byos-images), below.
+ You can use any Gen1 or Gen2 VM with a fixed-size VHD in Azure Marketplace to create Linux custom images. This excludes Red Hat Enterprise Linux (RHEL) images, which require extra steps and can only be used to create a Gen1 VM image. For a list of Azure Marketplace images that could work, see [Commonly used Azure Marketplace images for Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images). For guidance on RHEL images, see [Using RHEL BYOS images](#using-rhel-byos-images), below.
1. Deprovision the VM. Use the Azure VM agent to delete machine-specific files and data. Use the `waagent` command with the `-deprovision+user` parameter on your source Linux VM. For more information, see [Understanding and using Azure Linux Agent](../virtual-machines/extensions/agent-linux.md).
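+
+   A minimal sketch of that deprovision step, run over SSH from a client machine; the user name and IP address are placeholders:
+
+   ```powershell
+   # Run the deprovision step on the source Linux VM over SSH.
+   # Placeholder user and IP; -t lets sudo prompt for a password if needed.
+   ssh -t azureuser@10.126.68.85 "sudo waagent -deprovision+user"
+   ```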
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
Previously updated : 06/14/2021 Last updated : 05/24/2022 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
PS /home/user> az vm image list --all --publisher "Canonical" --offer "UbuntuSer
PS /home/user> ```
->[!IMPORTANT]
-> Use only the Gen 1 images. Any images specified as Gen 2 (usually the sku has a "-g2" suffix), do not work on Azure Stack Edge.
- In this example, we will select Windows Server 2019 Datacenter Core, version 2019.0.20190410. We will identify this image by its Universal Resource Number ("URN"). :::image type="content" source="media/azure-stack-edge-create-virtual-machine-marketplace-image/marketplace-image-1.png" alt-text="List of marketplace images":::
databox-online Azure Stack Edge Gpu Create Virtual Switch Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-switch-powershell.md
Before you begin, make sure that:
The client machine should be running a [Supported OS](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device). -- Use the local UI to enable compute on one of the physical network interfaces on your device as per the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips) on your device.
+- Use the local UI to enable compute on one of the physical network interfaces on your device as per the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches) on your device.
## Connect to the PowerShell interface
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
In this tutorial, you learn how to:
Before you set up a compute role on your Azure Stack Edge Pro device: - Make sure that you've activated your Azure Stack Edge Pro device as described in [Activate Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).-- Make sure that you've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips) and:
+- Make sure that you've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches) and:
- Enabled a network interface for compute. - Assigned Kubernetes node IPs and Kubernetes external service IPs.
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 04/06/2022 Last updated : 05/24/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
Follow these steps to configure the network for your device.
![Screenshot of local web UI "Network" tile for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/network-1.png)
- On your physical device, there are six network interfaces. PORT 1 and PORT 2 are 1-Gbps network interfaces. PORT 3, PORT 4, PORT 5, and PORT 6 are all 25-Gbps network interfaces that can also serve as 10-Gbps network interfaces. PORT 1 is automatically configured as a management-only port, and PORT 2 to PORT 6 are all data ports. For a new device, the **Network settings** page is as shown below.
+ On your physical device, there are six network interfaces. PORT 1 and PORT 2 are 1-Gbps network interfaces. PORT 3, PORT 4, PORT 5, and PORT 6 are all 25-Gbps network interfaces that can also serve as 10-Gbps network interfaces. PORT 1 is automatically configured as a management-only port, and PORT 2 to PORT 6 are all data ports. For a new device, the **Network** page is as shown below.
![Screenshot of local web UI "Network" page for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/network-2a.png)
Follow these steps to configure the network for your device.
![Screenshot of local web UI "Port 3 Network settings" for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/network-4.png)
+   - By default, you're expected to set an IP for all the ports. If you decide not to set an IP for a network interface on your device, set the IP to **No** and then **Modify** the settings.
+
+ ![Screenshot of local web UI "Port 2 Network settings" for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/set-ip-no.png)
+ As you configure the network settings, keep in mind: * Make sure that Port 5 and Port 6 are connected for Network Function Manager deployments. For more information, see [Tutorial: Deploy network functions on Azure Stack Edge (Preview)](../network-function-manager/deploy-functions.md).
Follow these steps to configure the network for your device.
* If DHCP isn't enabled, you can assign static IPs if needed. * You can configure your network interface as IPv4. * Serial number for any port corresponds to the node serial number. <!--* On 25-Gbps interfaces, you can set the RDMA (Remote Direct Access Memory) mode to iWarp or RoCE (RDMA over Converged Ethernet). Where low latencies are the primary requirement and scalability is not a concern, use RoCE. When latency is a key requirement, but ease-of-use and scalability are also high priorities, iWARP is the best candidate.-->
- <!--* Network Interface Card (NIC) Teaming or link aggregation is not supported with Azure Stack Edge. <!--NIC teaming should work for 2-node -->
+
> [!NOTE] > If you need to connect to your device from an outside network, see [Enable device access from outside network](azure-stack-edge-gpu-manage-access-power-connectivity-mode.md#enable-device-access-from-outside-network) for additional network settings.
Follow these steps to configure the network for your device.
After you have configured and applied the network settings, select **Next: Advanced networking** to configure compute network.
-## Configure virtual switches and compute IPs
+## Configure virtual switches
-Follow these steps to enable compute on a virtual switch and configure virtual networks.
+Follow these steps to add or delete virtual switches and virtual networks.
1. In the local UI, go to **Advanced networking** page.
-1. In the **Virtual switch** section, you'll assign compute intent to a virtual switch. Select **Add virtual switch** to create a new switch.
+1. In the **Virtual switch** section, you'll add or delete virtual switches. Select **Add virtual switch** to create a new switch.
![Screenshot of "Advanced networking" page in local UI for one node with Add virtual switch selected.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-1.png)
Follow these steps to enable compute on a virtual switch and configure virtual n
1. Provide a name for your virtual switch. 1. Choose the network interface on which the virtual switch should be created. 1. If deploying 5G workloads, set **Supports accelerated networking** to **Yes**.
- 1. Select the intent to associate with this network interface as **compute**. Alternatively, the switch can be used for management traffic as well. You can't configure storage intent as storage traffic was already configured based on the network topology that you selected earlier.
-
- > [!TIP]
- > Use *CTRL + Click* to select more than one intent for your virtual switch.
-
-1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
-
- For an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses (or more) are provided for the compute VM using the start and end IP addresses. For a 1-node device, provide a minimum of 2 contiguous IPv4 addresses.
-
- > [!IMPORTANT]
- > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
- > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
- > - If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
-
-1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
-
- > [!IMPORTANT]
- > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
-
-1. Select **Apply**.
-
- ![Screenshot of "Advanced networking" page in local UI with fully configured Add virtual switch blade for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
-
-1. The configuration takes a couple minutes to apply and you may need to refresh the browser. You can see that the specified virtual switch is created and enabled for compute.
+ 1. Select **Apply**. You can see that the specified virtual switch is created.
![Screenshot of "Advanced networking" page with virtual switch added and enabled for compute in local UI for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-3.png)
+1. You can create more than one switch by following the steps described earlier.
+1. To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
-To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
+You can now create virtual networks and associate them with the virtual switches you created.
-> [!IMPORTANT]
-> Only one virtual switch can be assigned for compute.
-### Configure virtual network
+## Configure virtual networks
You can add or delete virtual networks associated with your virtual switches. To add a virtual network, follow these steps:
You can add or delete virtual networks associated with your virtual switches. To
1. Provide a **Name** for your virtual network. 1. Enter a **VLAN ID** as a unique number in the 1-4094 range. The VLAN ID that you provide should be in your trunk configuration. For more information on trunk configuration for your switch, refer to the instructions from your physical switch manufacturer. 1. Specify the **Subnet mask** and **Gateway** for your virtual LAN network as per the physical network configuration.
- 1. Select **Apply**.
+ 1. Select **Apply**. A virtual network is created on the specified virtual switch.
![Screenshot of how to add virtual network in "Advanced networking" page in local UI for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-virtual-network-one-node-1.png)
-To delete a virtual network, under the **Virtual network** section, select **Delete virtual network**.
+1. To delete a virtual network, under the **Virtual network** section, select **Delete virtual network**, and then select the virtual network you want to delete.
+
+1. Select **Next: Kubernetes >** to configure compute IPs for Kubernetes.
++
+## Configure compute IPs
+
+Follow these steps to configure compute IPs for your Kubernetes workloads.
+
+1. In the local UI, go to the **Kubernetes** page.
+
+1. From the dropdown, select a virtual switch that you'll use for Kubernetes compute traffic. <!--By default, all switches are configured for management. You can't configure storage intent as storage traffic was already configured based on the network topology that you selected earlier.-->
+
+1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
+
+    For an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses (or more) is provided for the compute VM using the start and end IP addresses. For a 1-node device, provide a minimum of 2 contiguous IPv4 addresses.
+
+ > [!IMPORTANT]
+    > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets). A sketch of this cmdlet follows these steps.
+ > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
+ > - If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are on the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
+
+1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
+
+ > [!IMPORTANT]
+ > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
+
+1. Select **Apply**.
+
+ ![Screenshot of "Advanced networking" page in local UI with fully configured Add virtual switch blade for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
-Select **Next: Web proxy** to configure web proxy.
+1. The configuration takes a couple minutes to apply and you may need to refresh the browser.
+
+1. Select **Next: Web proxy** to configure web proxy.
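+
+If the default Kubernetes subnets collide with your network, the node IP step above points to the `Set-HcsKubeClusterNetworkInfo` cmdlet. The sketch below runs from the PowerShell interface of the device; the parameter names are assumptions based on the linked article, so verify them there before use.
+
+```powershell
+# Run from the PowerShell interface of the device. Parameter names are
+# assumptions; see "Change Kubernetes pod and service subnets" for the
+# authoritative syntax. Subnet values are placeholders.
+Set-HcsKubeClusterNetworkInfo -PodSubnet "10.96.0.0/16" -ServiceSubnet "10.97.0.0/16"
+```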
::: zone-end
To configure the network for a 2-node device, follow these steps on the first no
![Local web UI "Advanced networking" page for a new device 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-settings-1m.png)
+   By default, you're expected to set an IP for all the ports. If you decide not to set an IP for a network interface on your device, set the IP to **No** and then **Modify** the settings.
+
+ ![Screenshot of local web UI "Port 2 Network settings" for one node.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/set-ip-no.png)
+ As you configure the network settings, keep in mind: * Make sure that Port 5 and Port 6 are connected for Network Function Manager deployments. For more information, see [Tutorial: Deploy network functions on Azure Stack Edge (Preview)](../network-function-manager/deploy-functions.md).
For clients connecting via NFS protocol to the two-node device, follow these ste
> [!NOTE] > Virtual IP settings are required. If you do not configure this IP, you will be blocked when configuring the **Device settings** in the next step.
-### Configure virtual switches and compute IPs
+### Configure virtual switches
-After the cluster is formed and configured, you'll now create new virtual switches or assign intent to the existing virtual switches that are created based on the selected network topology.
+After the cluster is formed and configured, you can now create new virtual switches.
> [!IMPORTANT] > On a two-node cluster, compute should only be configured on a virtual switch. 1. In the local UI, go to **Advanced networking** page.
-1. In the **Virtual switch** section, you'll assign compute intent to a virtual switch. You can select an existing virtual switch or select **Add virtual switch** to create a new switch.
+1. In the **Virtual switch** section, add or delete virtual switches. Select **Add virtual switch** to create a new switch.
![Configure compute page in Advanced networking in local UI 1](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-1.png) 1. In the **Network settings** blade, if using a new switch, provide the following:
- 1. Provide a name for your virtual switch.
+ 1. Provide a name for your virtual switch.
1. Choose the network interface on which the virtual switch should be created. 1. If deploying 5G workloads, set **Supports accelerated networking** to **Yes**.
- 1. Select the intent to associate with this network interface as **compute**. Alternatively, the switch can be used for management traffic as well. You can't configure storage intent as storage traffic was already configured based on the network topology that you selected earlier.
-
- > [!TIP]
- > Use *CTRL + Click* to select more than one intent for your virtual switch.
-
-1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
-
- For an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses (or more) are provided for the compute VM using the start and end IP addresses. For a 1-node device, provide a minimum of 2 contiguous IPv4 addresses. For a two-node cluster, provide a minimum of 3 contiguous IPv4 addresses.
-
- > [!IMPORTANT]
- > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
- > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
-
-1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
-
- > [!IMPORTANT]
- > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
-
-1. Select **Apply**.
+ 1. Select **Apply**.
- ![Configure compute page in Advanced networking in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
+1. The configuration will take a couple of minutes to apply. Once the virtual switch is created, the list of virtual switches updates to reflect the newly created switch, and the switch is enabled for compute.
-1. The configuration takes a couple minutes to apply and you may need to refresh the browser. You can see that the specified virtual switch is created and enabled for compute.
-
![Configure compute page in Advanced networking in local UI 3](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-3.png)
+1. You can create more than one switch by following the steps described earlier.
-To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
+1. To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
-> [!IMPORTANT]
-> Only one virtual switch can be assigned for compute.
+Next, you can create virtual networks and associate them with your virtual switches.
### Configure virtual network
-You can add or delete virtual networks associated with your virtual switches. To add a virtual switch, follow these steps:
+You can add or delete virtual networks associated with your virtual switches. To add a virtual network, follow these steps:
1. In the local UI on the **Advanced networking** page, under the **Virtual network** section, select **Add virtual network**. 1. In the **Add virtual network** blade, input the following information:
You can add or delete virtual networks associated with your virtual switches. To
1. Specify the **Subnet mask** and **Gateway** for your virtual LAN network as per the physical network configuration. 1. Select **Apply**.
+   ![Screenshot of how to add virtual network in "Advanced networking" page in local UI for a two-node device.](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/add-virtual-network-one-node-1.png)
+
+1. To delete a virtual network, under the **Virtual network** section, select **Delete virtual network**, and then select the virtual network you want to delete.
+
+Select **Next: Kubernetes >** to configure compute IPs for Kubernetes.
+++
+## Configure compute IPs
+
+After the virtual switches are created, you can enable these switches for Kubernetes compute traffic.
+
+1. In the local UI, go to the **Kubernetes** page.
+1. From the dropdown list, select the virtual switch you want to enable for Kubernetes compute traffic.
+1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
+
+    For an *n*-node device, a contiguous range of a minimum of *n+1* IPv4 addresses (or more) is provided for the compute VM using the start and end IP addresses. For a 1-node device, provide a minimum of 2 contiguous IPv4 addresses. For a two-node cluster, provide a minimum of 3 contiguous IPv4 addresses.
+
+ > [!IMPORTANT]
+ > - Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
+ > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
+
+1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
+
+ > [!IMPORTANT]
+ > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
+
+1. Select **Apply**.
+
+ ![Configure compute page in Advanced networking in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
+
+1. The configuration takes a couple minutes to apply and you may need to refresh the browser.
-To delete a virtual network, under the **Virtual network** section, select **Delete virtual network**.
::: zone-end
This is an optional configuration. Although web proxy configuration is optional,
2. To validate and apply the configured web proxy settings, select **Apply**.
- ![Local web UI "Web proxy settings" page 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-web-proxy-1.png)<!--UI text update for instruction text is needed.-->
+    ![Local web UI "Web proxy settings" page 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-web-proxy-1.png)
1. After the settings are applied, select **Next: Device**.
databox-online Azure Stack Edge Gpu Deploy Gpu Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine.md
Previously updated : 08/03/2021 Last updated : 05/26/2022 #Customer intent: As an IT admin, I want the flexibility to deploy a single GPU virtual machine (VM) quickly in the portal or use templates to deploy and manage multiple GPU VMs efficiently on my Azure Stack Edge Pro GPU device.
Use the Azure portal to quickly deploy a single GPU VM. You can install the GPU
You can deploy a GPU VM via the portal or using Azure Resource Manager templates.
-For a list of supported operating systems, drivers, and VM sizes for GPU VMs, see [What are GPU virtual machines?](azure-stack-edge-gpu-overview-gpu-virtual-machines.md). For deployment considerations, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
+For a list of supported operating systems, drivers, and VM sizes for GPU VMs, see [What are GPU virtual machines?](azure-stack-edge-gpu-overview-gpu-virtual-machines.md) For deployment considerations, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
> [!IMPORTANT]
-> If your device will be running Kubernetes, do not configure Kubernetes before you deploy your GPU VMs. If you configure Kubernetes first, it claims all the available GPU resources, and GPU VM creation will fail. For Kubernetes deployment considerations on 1-GPU and 2-GPU devices, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
+> - Gen2 VMs are not supported for GPU.
+> - If your device will be running Kubernetes, do not configure Kubernetes before you deploy your GPU VMs. If you configure Kubernetes first, it claims all the available GPU resources, and GPU VM creation will fail. For Kubernetes deployment considerations on 1-GPU and 2-GPU devices, see [GPU VMs and Kubernetes](azure-stack-edge-gpu-overview-gpu-virtual-machines.md#gpu-vms-and-kubernetes).
+> - If you're running a Windows 2016 VHD, you must enable TLS 1.2 inside the VM before you install the GPU extension on 2205 and higher. For detailed steps, see [Troubleshoot GPU extension issues for GPU VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md#failure-to-install-gpu-extension-on-a-windows-2016-vhd).
### [Portal](#tab/portal)
databox-online Azure Stack Edge Gpu Deploy Virtual Machine High Performance Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-high-performance-network.md
Previously updated : 09/29/2021 Last updated : 05/19/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
In addition to the above prerequisites that are used for VM creation, you'll als
Follow these steps to create an HPN VM on your device.
-1. In the Azure portal of your Azure Stack Edge resource, [Add a VM image](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#add-a-vm-image). You'll use this VM image to create a VM in the next step.
+1. In the Azure portal of your Azure Stack Edge resource, [Add a VM image](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#add-a-vm-image). You'll use this VM image to create a VM in the next step. You can choose either Gen1 or Gen2 for the VM.
1. Follow all the steps in [Add a VM](azure-stack-edge-gpu-deploy-virtual-machine-portal.md#add-a-vm) with this configuration requirement.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md
Previously updated : 08/02/2021 Last updated : 05/26/2022 #Customer intent: As an IT admin, I need to understand how install GPU extension on GPU virtual machines (VMs) on my Azure Stack Edge Pro GPU device.
This article describes how to install GPU driver extension to install appropriate Nvidia drivers on the GPU VMs running on your Azure Stack Edge device. The article covers installation steps for installing a GPU extension using Azure Resource Manager templates on both Windows and Linux VMs. > [!NOTE]
-> In the Azure portal, you can install a GPU extension during VM creation or after the VM is deployed. For steps and requirements, see [Deploy GPU virtual machines](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
-
+> - In the Azure portal, you can install a GPU extension during VM creation or after the VM is deployed. For steps and requirements, see [Deploy GPU virtual machines](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
+> - If you're running a Windows 2016 VHD, you must enable TLS 1.2 inside the VM before you install the GPU extension on 2205 and higher. For detailed steps, see [Troubleshoot GPU extension issues for GPU VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md#failure-to-install-gpu-extension-on-a-windows-2016-vhd).
## Prerequisites
Before you install GPU extension on the GPU VMs running on your device, make sur
- Make sure that the port enabled for compute network on your device is connected to Internet and has access. The GPU drivers are downloaded through the internet access.
- Here is an example where Port 2 was connected to the internet and was used to enable the compute network. If Kubernetes is not deployed on your environment, you can skip the Kubernetes node IP and external service IP assignment.
+ Here's an example where Port 2 was connected to the internet and was used to enable the compute network. If Kubernetes isn't deployed on your environment, you can skip the Kubernetes node IP and external service IP assignment.
![Screenshot of the Compute pane for an Azure Stack Edge device. Compute settings for Port 2 are highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension/enable-compute-network-1.png) 1. [Download the GPU extension templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory youΓÇÖll use as a working directory.
-1. Verify that the client you'll use to access your device is still connected to the Azure Resource Manager over Azure PowerShell. The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute, will return error messages to the effect that you are not connected to Azure anymore. You will need to sign in again. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
+1. Verify that the client you'll use to access your device is still connected to the Azure Resource Manager over Azure PowerShell. The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute will return error messages to the effect that you aren't connected to Azure anymore. You'll need to sign in again. For detailed instructions, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
## Edit parameters file Depending on the operating system for your VM, you could install GPU extension for Windows or for Linux. - ### [Windows](#tab/windows) To deploy Nvidia GPU drivers for an existing VM, edit the `addGPUExtWindowsVM.parameters.json` parameters file and then deploy the template `addGPUextensiontoVM.json`.
+#### Version 2205 and higher
+
+The file `addGPUExtWindowsVM.parameters.json` takes the following parameters:
+
+```json
+"parameters": {
+ "vmName": {
+ "value": "<name of the VM>"
+ },
+ "extensionName": {
+ "value": "<name for the extension. Example: windowsGpu>"
+ },
+ "publisher": {
+ "value": "Microsoft.HpcCompute"
+ },
+ "type": {
+ "value": "NvidiaGpuDriverWindows"
+ },
+ "typeHandlerVersion": {
+ "value": "1.5"
+ },
+ "settings": {
+ "value": {
+ "DriverURL" : "http://us.download.nvidia.com/tesla/511.65/511.65-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe",
+ "DriverCertificateUrl" : "https://go.microsoft.com/fwlink/?linkid=871664",
+ "DriverType":"CUDA"
+ }
+ }
+ }
+```
+
+#### Versions lower than 2205
+
The file `addGPUExtWindowsVM.parameters.json` takes the following parameters: ```json
The file `addGPUExtWindowsVM.parameters.json` takes the following parameters:
### [Linux](#tab/linux)
-To deploy Nvidia GPU drivers for an existing Linux VM, edit the parameters file and then deploy the template `addGPUextensiontoVM.json`.
+To deploy Nvidia GPU drivers for an existing Linux VM, edit the `addGPUExtLinuxVM.parameters.json` parameters file and then deploy the template `addGPUextensiontoVM.json`.
+
+#### Version 2205 and higher
+
+If using Ubuntu or Red Hat Enterprise Linux (RHEL), the `addGPUExtLinuxVM.parameters.json` file takes the following parameters:
+
+```json
+"parameters": {
+ "vmName": {
+ "value": "<name of the VM>"
+ },
+ "extensionName": {
+ "value": "<name for the extension. Example: linuxGpu>"
+ },
+ "publisher": {
+ "value": "Microsoft.HpcCompute"
+ },
+ "type": {
+ "value": "NvidiaGpuDriverLinux"
+ },
+ "typeHandlerVersion": {
+ "value": "1.8"
+ },
+ "settings": {
+    }
+  }
+```
+
+#### Versions lower than 2205
If using Ubuntu or Red Hat Enterprise Linux (RHEL), the `addGPUExtLinuxVM.parameters.json` file takes the following parameters:
If using Ubuntu or Red Hat Enterprise Linux (RHEL), the `addGPUExtLinuxVM.parame
} ```
-Here is a sample Ubuntu parameter file that was used in this article:
+Here's a sample Ubuntu parameter file that was used in this article:
```powershell {
Here is a sample Ubuntu parameter file that was used in this article:
If you created your VM using a Red Hat Enterprise Linux Bring Your Own Subscription image (RHEL BYOS), make sure that: - You've followed the steps in [using RHEL BYOS image](azure-stack-edge-gpu-create-virtual-machine-image.md). -- After you created the GPU VM, register and subscribe the VM with the Red Hat Customer portal. If your VM is not properly registered, installation does not proceed as the VM is not entitled. See [Register and automatically subscribe in one step using the Red Hat Subscription Manager](https://access.redhat.com/solutions/253273). This step allows the installation script to download relevant packages for the GPU driver.
+- After you created the GPU VM, register and subscribe the VM with the Red Hat Customer portal. If your VM isn't properly registered, installation doesn't proceed as the VM isn't entitled. See [Register and automatically subscribe in one step using the Red Hat Subscription Manager](https://access.redhat.com/solutions/253273). This step allows the installation script to download relevant packages for the GPU driver.
- You either manually install the `vulkan-filesystem` package or add CentOS7 repo to your yum repo list. When you install the GPU extension, the installation script looks for a `vulkan-filesystem` package that is on CentOS7 repo (for RHEL7).
If you created your VM using a Red Hat Enterprise Linux Bring Your Own Subscript
### [Windows](#tab/windows)
-Deploy the template `addGPUextensiontoVM.json`. This template deploys extension to an existing VM. Run the following command:
+Deploy the template `addGPUextensiontoVM.json` to install the extension on an existing VM.
+
+Run the following command:
```powershell $templateFile = "<Path to addGPUextensiontoVM.json>" $templateParameterFile = "<Path to addGPUExtWindowsVM.parameters.json>"
-$RGName = "<Name of your resource group>"
+$RGName = "<Name of your resource group>"
New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Name for your deployment>" ``` > [!NOTE] > The extension deployment is a long running job and takes about 10 minutes to complete.
-Here is a sample output:
-
-```powershell
-PS C:\WINDOWS\system32> "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
-C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json
-PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
-PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\ExtensionTemplates\addGPUExtWindowsVM.parameters.json"
-PS C:\WINDOWS\system32> $RGName = "myasegpuvm1"
-PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment3"
-
-DeploymentName : deployment3
-ResourceGroupName : myasegpuvm1
-ProvisioningState : Succeeded
-Timestamp : 12/16/2020 12:18:50 AM
-Mode : Incremental
-TemplateLink :
-Parameters :
+Here's a sample output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
+ C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json
+ PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
+ PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\ExtensionTemplates\addGPUExtWindowsVM.parameters.json"
+ PS C:\WINDOWS\system32> $RGName = "myasegpuvm1"
+ PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment3"
+
+ DeploymentName : deployment3
+ ResourceGroupName : myasegpuvm1
+ ProvisioningState : Succeeded
+ Timestamp : 12/16/2020 12:18:50 AM
+ Mode : Incremental
+ TemplateLink :
+ Parameters :
Name Type Value =============== ========================= ========== vmName String VM2
Parameters :
"DriverType": "CUDA" }
-Outputs :
-DeploymentDebugLogLevel :
-PS C:\WINDOWS\system32>
-```
+ Outputs :
+ DeploymentDebugLogLevel :
+ PS C:\WINDOWS\system32>
+ ```
### [Linux](#tab/linux)
-Deploy the template `addGPUextensiontoVM.json`. This template deploys extension to an existing VM. Run the following command:
+Deploy the template `addGPUextensiontoVM.json` to install the extension to an existing VM.
+
+Run the following command:
```powershell $templateFile = "Path to addGPUextensiontoVM.json"
New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $tem
> [!NOTE] > The extension deployment is a long running job and takes about 10 minutes to complete.
-Here is a sample output:
+Here's a sample output:
```powershell Copyright (C) Microsoft Corporation. All rights reserved.
Outputs :
DeploymentDebugLogLevel : PS C:\WINDOWS\system32> ```+ ## Track deployment ### [Windows](#tab/windows)
-To check the deployment state of extensions for a given VM, run the following command:
+To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator), and then run the following command:
```powershell Get-AzureRmVMExtension -ResourceGroupName <Name of resource group> -VMName <Name of VM> -Name <Name of the extension> ```
-Here is a sample output:
+
+Here's a sample output:
```powershell PS C:\WINDOWS\system32> Get-AzureRmVMExtension -ResourceGroupName myasegpuvm1 -VMName VM2 -Name windowsgpuext
A successful install is indicated by a `message` as `Enable Extension` and `stat
### [Linux](#tab/linux)
-Template deployment is a long running job. To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator). Run the following command:
+To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator), and then run the following command:
```powershell Get-AzureRmVMExtension -ResourceGroupName myResourceGroup -VMName <VM Name> -Name <Extension Name> ```
-Here is a sample output:
+
+Here's a sample output:
```powershell Copyright (C) Microsoft Corporation. All rights reserved.
The extension execution output is logged to the following file: `/var/log/azure/
### [Windows](#tab/windows)
-Sign in to the VM and run the nvidia-smi command-line utility installed with the driver. The `nvidia-smi.exe` is located at `C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe`. If you do not see the file, it's possible that the driver installation is still running in the background. Wait for 10 minutes and check again.
+Sign in to the VM and run the nvidia-smi command-line utility installed with the driver.
+
+#### Version 2205 and higher
+
+The `nvidia-smi.exe` is located at `C:\Windows\System32\nvidia-smi.exe`. If you don't see the file, it's possible that the driver installation is still running in the background. Wait for 10 minutes and check again.
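+
+For example, on 2205 and higher you can invoke the utility directly from that path:
+
+```powershell
+# 2205 and higher: nvidia-smi ships in System32, so it's on the default path.
+& "C:\Windows\System32\nvidia-smi.exe"
+```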
-If the driver is installed, you see an output similar to the following sample:
+#### Versions lower than 2205
+
+The `nvidia-smi.exe` is located at `C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe`. If you don't see the file, it's possible that the driver installation is still running in the background. Wait for 10 minutes and check again.
+
+If the driver is installed, you see an output similar to the following sample:
```powershell PS C:\Users\Administrator> cd "C:\Program Files\NVIDIA Corporation\NVSMI"
Follow these steps to verify the driver installation:
1. Connect to the GPU VM. Follow the instructions in [Connect to a Linux VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-linux-vm).
- Here is a sample output:
+ Here's a sample output:
```powershell PS C:\WINDOWS\system32> ssh -l Administrator 10.57.50.60
Follow these steps to verify the driver installation:
Administrator@VM1:~$ ```
-2. Run the nvidia-smi command-line utility installed with the driver. If the driver is successfully installed, you will be able to run the utility and see the following output:
+2. Run the nvidia-smi command-line utility installed with the driver. If the driver is successfully installed, you'll be able to run the utility and see the following output:
```powershell Administrator@VM1:~$ nvidia-smi
For more information, see [Nvidia GPU driver extension for Linux](../virtual-mac
> [!NOTE] > After you finish installing the GPU driver and GPU extension, you no longer need to use a port with Internet access for compute. - - ## Remove GPU extension To remove the GPU extension, use the following command: `Remove-AzureRmVMExtension -ResourceGroupName <Resource group name> -VMName <VM name> -Name <Extension name>`
-Here is a sample output:
+Here's a sample output:
```powershell PS C:\azure-stack-edge-deploy-vms> Remove-AzureRmVMExtension -ResourceGroupName rgl -VMName WindowsVM -Name windowsgpuext
Requestld IsSuccessStatusCode StatusCode ReasonPhrase
True OK OK ``` - ## Next steps Learn how to:
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
Previously updated : 04/11/2022 Last updated : 05/25/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
Follow these steps to create a VM on your Azure Stack Edge Pro GPU device.
|Edge resource group |Select the resource group to add the image to. | |Save image as | The name for the VM image that you're creating from the VHD you uploaded to the storage account. | |OS type |Choose from Windows or Linux as the operating system of the VHD you'll use to create the VM image. |
+ |VM generation |Choose Gen 1 or Gen 2 as the generation of the image you'll use to create the VM. |
- ![Screenshot showing the Add image page for a virtual machine, with the Add button highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-6.png)
+ ![Screenshot showing the Add image page for a virtual machine with the Add button highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-6.png)
1. The VHD is downloaded, and the VM image is created. Image creation takes several minutes to complete. You'll see a notification for the successful completion of the VM image.<!--There's a fleeting notification that image creation is in progress, but I didn't see any notification that image creation completed successfully.-->
Follow these steps to create a VM on your Azure Stack Edge Pro GPU device.
1. After the VM image is successfully created, it's added to the list of images on the **Images** pane.
- ![Screenshot that shows the Images pane in Virtual Machines view of an Azure Stack Edge device. The entry for a VM image is highlighted.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-9.png)
+ ![Screenshot that shows the Images pane in Virtual Machines view of an Azure Stack Edge device.](media/azure-stack-edge-gpu-deploy-virtual-machine-portal/add-virtual-machine-image-9.png)
The **Deployments** pane updates to indicate the status of the deployment.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script.md
Previously updated : 03/08/2021 Last updated : 05/24/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using an Azure PowerShell script so that I can efficiently manage my VMs.
Before you begin creating and managing a VM on your Azure Stack Edge Pro device
Location : DBELocal Tags :
- New-AzureRmImage -Image Microsoft.Azure.Commands.Compute.Automation.Models.PSImage -ImageName ig201221071831 -ResourceGroupName rg201221071831
+ New-AzureRmImage -Image Microsoft.Azure.Commands.Compute.Automation.Models.PSImage -ImageName ig201221071831 -ResourceGroupName rg201221071831 -HyperVGeneration V1
ResourceGroupName : rg201221071831 SourceVirtualMachine :
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell.md
Previously updated : 04/18/2022 Last updated : 05/24/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device. I want to use APIs so that I can efficiently manage my VMs.
You'll now create a VM image from the managed disk.
$DiskSize = "<Size greater than or equal to size of source managed disk>" $OsType = "<linux or windows>" $ImageName = "<Image name>"
+ $hyperVGeneration = "<Generation of the image: V1 or V2>"
``` 1. Create a VM image. The supported OS types are Linux and Windows. ```powershell
- $imageConfig = New-AzImageConfig -Location DBELocal
+ $imageConfig = New-AzImageConfig -Location DBELocal -HyperVGeneration $hyperVGeneration
$ManagedDiskId = (Get-AzDisk -Name $DiskName -ResourceGroupName $ResourceGroupName).Id Set-AzImageOsDisk -Image $imageConfig -OsType $OsType -OsState 'Generalized' -DiskSizeGB $DiskSize -ManagedDiskId $ManagedDiskId New-AzImage -Image $imageConfig -ImageName $ImageName -ResourceGroupName $ResourceGroupName
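Put together with sample values, the sequence might look like the following sketch. All names and the disk size are placeholders, not values from your deployment:

```powershell
# Example values only; substitute your own disk, resource group, and image names.
$DiskName = "mymd1"
$ResourceGroupName = "myaserg"
$DiskSize = 32
$OsType = "linux"
$ImageName = "myaselinuximg2"
$hyperVGeneration = "V2"

# Build the image configuration, attach the generalized OS disk, and create the image.
$imageConfig = New-AzImageConfig -Location DBELocal -HyperVGeneration $hyperVGeneration
$ManagedDiskId = (Get-AzDisk -Name $DiskName -ResourceGroupName $ResourceGroupName).Id
Set-AzImageOsDisk -Image $imageConfig -OsType $OsType -OsState 'Generalized' -DiskSizeGB $DiskSize -ManagedDiskId $ManagedDiskId
New-AzImage -Image $imageConfig -ImageName $ImageName -ResourceGroupName $ResourceGroupName
```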
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-templates.md
Previously updated : 04/22/2022 Last updated : 05/25/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
-# Deploy VMs on your Azure Stack Edge Pro GPU device via templates
+# Deploy VMs on your Azure Stack Edge Pro GPU device via templates
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
The file `CreateImage.parameters.json` takes the following parameters:
"imageUri": { "value": "<Path to the VHD that you uploaded in the Storage account>" },
+ "hyperVGeneration": {
+ "type": "string",
+ "value": "<Generation of the VM, V1 or V2>
+ },
} ``` Edit the file `CreateImage.parameters.json` to include the following values for your Azure Stack Edge Pro device:
-1. Provide the OS type corresponding to the VHD you'll upload. The OS type can be Windows or Linux.
+1. Provide the OS type and Hyper-V generation corresponding to the VHD you'll upload. The OS type can be Windows or Linux, and the VM generation can be V1 or V2.
```json "parameters": { "osType": { "value": "Windows"
- },
+ },
+ "hyperVGeneration": {
+ "value": "V2"
+ }
+ }
``` 2. Change the image URI to the URI of the image you uploaded in the earlier step:
Edit the file `CreateImage.parameters.json` to include the following values for
"osType": { "value": "Linux" },
+ "hyperVGeneration": {
+ "value": "V1"
+ },
"imageName": { "value": "myaselinuximg" }, "imageUri": { "value": "https://sa2.blob.myasegpuvm.wdshcsso.com/con1/ubuntu18.04waagent.vhd"
- }
+ }
} } ```
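After the parameters file is filled in, the image deployment itself is typically kicked off with a command along these lines. The template file, parameters file, resource group, and deployment names below are placeholders:

```powershell
# Deploy the image-creation template with the edited parameters file.
New-AzureRmResourceGroupDeployment -ResourceGroupName rg1 `
    -TemplateFile ".\CreateImage.json" `
    -TemplateParameterFile ".\CreateImage.parameters.json" `
    -Name "deployment1"
```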
databox-online Azure Stack Edge Gpu Prepare Windows Generalized Image Iso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-prepare-windows-generalized-image-iso.md
To create your new virtual machine, follow these steps:
![New Virtual Machine wizard, Specify Name and Location](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-08.png)
-4. Under **Specify Generation**, select **Generation 1**. Then select **Next >**.
+4. Under **Specify Generation**, select **Generation 1** or **Generation 2**. Then select **Next >**.
![New Virtual Machine wizard, Choose the generation of virtual machine to create](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-09.png)
databox-online Azure Stack Edge Gpu Prepare Windows Vhd Generalized Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md
Previously updated : 06/18/2021 Last updated : 05/18/2022 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
You'll use this fixed-size VHD for all the subsequent steps in this article.
![Specify name and location for your VM](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/create-virtual-machine-2.png)
-1. On the **Specify generation** page, choose **Generation 1** for the .vhd device image type, and then select **Next**.
+1. On the **Specify generation** page, choose **Generation 1** or **Generation 2** for the .vhd device image type, and then select **Next**.
![Specify generation](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/create-virtual-machine-3.png)
databox-online Azure Stack Edge Gpu Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-quickstart.md
Before you deploy, make sure that following prerequisites are in place:
5. **Configure compute network**: Create a virtual switch by enabling a port on your device. Enter 2 free, contiguous static IPs for Kubernetes nodes in the same network in which you created the switch. Provide at least 1 static IP for the IoT Edge Hub service to access compute modules, and 1 static IP for each extra service or container that you want to access from outside the Kubernetes cluster.
- Kubernetes is required to deploy all containerized workloads. See more information on [Compute network settings](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips).
+ Kubernetes is required to deploy all containerized workloads. See more information on [Compute network settings](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches).
6. **Configure web proxy**: If you use web proxy in your environment, enter web proxy server IP in `http://<web-proxy-server-FQDN>:<port-id>`. Set authentication to **None**. See more information on [Web proxy settings](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-web-proxy).
databox-online Azure Stack Edge Gpu Troubleshoot Virtual Machine Gpu Extension Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md
Previously updated : 08/02/2021 Last updated : 05/26/2022 # Troubleshoot GPU extension issues for GPU VMs on Azure Stack Edge Pro GPU
This article gives guidance for resolving the most common issues that cause inst
For installation steps, see [Install GPU extension](./azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md?tabs=linux).
+## In versions lower than 2205, Linux GPU extension installs old signing keys: signature and/or required key missing
+
+**Error description:** The Linux GPU extension installs old signing keys, preventing download of the required GPU driver. In this case, you'll see the following error in the syslog of the Linux VM:
+
+ ```powershell
+ /var/log/syslog and /var/log/waagent.log
 + May  5 06:04:53 gpuvm12 kernel: [  833.601805] nvidia: module verification failed: signature and/or required key missing - tainting kernel
+ ```
+**Suggested solutions:** You have two options to mitigate this issue:
+
+- **Option 1:** Apply the Azure Stack Edge 2205 updates to your device.
+- **Option 2:** After you create a GPU virtual machine of a size in the NCasT4_v3 series, manually install the new signing keys before you install the extension. To set the required signing keys, follow the steps in [Updating the CUDA Linux GPG Repository Key | NVIDIA Technical Blog](https://developer.nvidia.com/blog/updating-the-cuda-linux-gpg-repository-key/).
+
+ Here's an example that installs signing keys on an Ubuntu 1804 virtual machine:
+
 + ```bash
 + $ sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
+ ```
+
+## Failure to install GPU extension on a Windows 2016 VHD
+
+**Error description:** This is a known issue in versions lower than 2205. The GPU extension requires TLS 1.2. In this case, you may see the following error message:
+
+ ```azurecli
+ Failed to download https://go.microsoft.com/fwlink/?linkid=871664 after 10 attempts. Exiting!
+ ```
+
+Additional details:
+
+- Check the guest log for the associated error. To collect the guest logs, see [Collect guest logs for VMs on an Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-collect-virtual-machine-guest-logs.md).
+- On a Linux VM, look in `/var/log/waagent.log` or `/var/log/azure/nvidia-vmext-status`.
+- On a Windows VM, find the error status in `C:\Packages\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\1.3.0.0\Status`.
+- Review the complete execution log in `C:\WindowsAzure\Logs\WaAppAgent.txt`.
+
+If the installation failed during the package download, that error indicates the VM couldn't access the public network to download the driver.
+
+**Suggested solution:** Use the following steps to enable TLS 1.2 on a Windows 2016 VM, and then deploy the GPU extension.
+
+1. Run the following command inside the VM to enable TLS 1.2:
+
+ ```powershell
 +   # Set-ItemProperty (alias 'sp') enables strong cryptography (TLS 1.2) for .NET Framework 4.x
 +   sp hklm:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319 SchUseStrongCrypto 1
+ ```
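+
+   Optionally, read the value back to confirm the change (`gp` is the built-in alias for `Get-ItemProperty`):
+
+   ```powershell
+   # Should show SchUseStrongCrypto : 1 once the setting is applied.
+   gp hklm:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319 | Select-Object SchUseStrongCrypto
+   ```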
+
+1. Deploy the template `addGPUextensiontoVM.json` to install the extension on an existing VM. You can install the extension manually, or you can install the extension from the Azure portal.
+
 +    - To install the extension manually, see [Install GPU extension on VMs for your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md).
+ - To install the template using the Azure portal, see [Deploy GPU VMs on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md).
+
+ > [!NOTE]
 +    > The extension deployment is a long-running job and takes about 10 minutes to complete.
+
+## Manually install the Nvidia driver on RHEL 7
+
+**Error description:** When you install the GPU extension on an RHEL 7 VM, the installation may fail because of a certificate rotation issue and an incompatible driver version.
+
+**Suggested solution:** In this case, you have two options:
+
+- **Option 1:** Resolve the certificate rotation issue and then install an Nvidia driver lower than version 510.
+
+ 1. To resolve the certificate rotation issue, run the following command:
+
 +      ```bash
+ $ sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/$arch/cuda-rhel7.repo
+ ```
+
+ 1. Install an Nvidia driver lower than version 510.
+
+- **Option 2:** Deploy the GPU extension. Use the following settings when deploying the ARM extension:
+
 +   ```json
 +   "settings": {
 +       "isCustomInstall": true,
 +       "InstallMethod": 0,
 +       "DRIVER_URL": "https://developer.download.nvidia.com/compute/cuda/11.4.4/local_installers/cuda-repo-rhel7-11-4-local-11.4.4_470.82.01-1.x86_64.rpm",
 +       "DKMS_URL": "https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm",
 +       "LIS_URL": "https://aka.ms/lis",
 +       "LIS_RHEL_ver": "3.10.0-1062.9.1.el7"
 +   }
+ ```
+ ## VM size is not GPU VM size **Error description:** A GPU VM must be either Standard_NC4as_T4_v3 or Standard_NC8as_T4_v3 size. If any other VM size is used, the GPU extension will fail to be attached.
databox-online Azure Stack Edge Gpu Virtual Machine Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-overview.md
Previously updated : 04/21/2022 Last updated : 05/18/2022
You can run a maximum of 24 VMs on your device. This is another factor to consid
### Operating system disks and images
-On your device, you can only use Generation 1 VMs with a fixed virtual hard disk (VHD) format. VHDs are used to store the machine operating system (OS) and data. VHDs are also used for the images you use to install an OS.
+On your device, you can use Generation 1 or Generation 2 VMs with a fixed virtual hard disk (VHD) format. VHDs are used to store the machine operating system (OS) and data. VHDs are also used for the images you use to install an OS.
The images that you use to create VM images can be generalized or specialized. When creating images for your VMs, you must prepare the images. See the various ways to prepare and use VM images on your device:
databox-online Azure Stack Edge Pro 2 Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md
In this tutorial, you learn how to:
Before you set up a compute role on your Azure Stack Edge Pro device, make sure that: - You've activated your Azure Stack Edge Pro 2 device as described in [Activate Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-activate.md).-- Make sure that you've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips) and:
+- Make sure that you've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches) and:
- Enabled a network interface for compute. - Assigned Kubernetes node IPs and Kubernetes external service IPs.
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
To protect the Azure Resource Manager based registries in your subscription, ena
> > :::image type="content" source="media/defender-for-containers/enable-defender-for-containers.png" alt-text="Enable Microsoft Defender for Containers from the Defender plans page."::: >
-> Learn more about this change in [the release note](release-notes.md#microsoft-defender-for-containers-plan-released-for-general-availability-ga).
+> Learn more about this change in [the release note](release-notes-archive.md#microsoft-defender-for-containers-plan-released-for-general-availability-ga).
|Aspect|Details| |-|:-|
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Host-level threat detection for your Linux AKS nodes is available if you enable
> > :::image type="content" source="media/defender-for-containers/enable-defender-for-containers.png" alt-text="Enable Microsoft Defender for Containers from the Defender plans page."::: >
-> Learn more about this change in [the release note](release-notes.md#microsoft-defender-for-containers-plan-released-for-general-availability-ga).
+> Learn more about this change in [the release note](release-notes-archive.md#microsoft-defender-for-containers-plan-released-for-general-availability-ga).
|Aspect|Details|
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
+
+ Title: Driving your organization to remediate security issues with recommendation governance in Microsoft Defender for Cloud
+description: Learn how to assign owners and due dates to security recommendations and create rules to automatically assign owners and due dates
+++++ Last updated : 05/29/2022+
+# Drive your organization to remediate security recommendations with governance
+
+Security teams are responsible for improving the security posture of their organizations, but they may not have the resources or authority to actually implement security recommendations. [Assigning owners with due dates](#manually-assigning-owners-and-due-dates-for-recommendation-remediation) and [defining governance rules](#building-an-automated-process-for-improving-security-with-governance-rules) creates accountability and transparency so you can drive the process of improving the security posture in your organization.
+
+Stay on top of the progress of the recommendations in your security posture. Weekly email notifications to the owners and their managers help make sure that they take timely action on the recommendations that can improve your security posture.
+
+## Building an automated process for improving security with governance rules
+
+To make sure your organization is systematically improving its security posture, you can define rules that assign an owner and set the due date for resources in the specified recommendations. That way, resource owners have a clear set of tasks and deadlines for remediating recommendations.
+
+You can then review the progress of the tasks by subscription, recommendation, or owner so you can follow up with tasks that need more attention.
+
+### Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
+|Pricing:|Free|
+|Required roles and permissions:|Azure - **Contributor**, **Security Admin**, or **Owner** on the subscription<br>AWS, GCP - **Contributor**, **Security Admin**, or **Owner** on the connector|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP accounts|
+
+### Defining governance rules to automatically set the owner and due date of recommendations
+
+Governance rules can identify resources that require remediation according to specific recommendations or severities, and the rule assigns an owner and due date to make sure the recommendations are handled. Many governance rules can apply to the same recommendations, so the rule with the lowest priority value is the one that assigns the owner and due date.
+
+The due date set for the recommendation to be remediated is based on a timeframe of 7, 14, 30, or 90 days from when the recommendation is found by the rule. For example, if the rule identifies the resource on March 1st and the remediation timeframe is 14 days, March 15th is the due date. You can apply a grace period so that the resources that are given a due date don't impact your secure score until they're overdue.
+
+You can also set the owner of the resources that are affected by the specified recommendations. In organizations that use resource tags to associate resources with an owner, you can specify the tag key and the governance rule reads the name of the resource owner from the tag.
+
+By default, email notifications are sent to the resource owners weekly with a list of their on-time and overdue tasks. If an email address for the owner's manager is found in the organizational Azure Active Directory (Azure AD), the manager also receives a weekly email showing any overdue recommendations.
+
+To define a governance rule that assigns an owner and due date:
+
+1. In the **Environment settings**, select the Azure subscription, AWS account, or Google project that you want to define the rule for.
+1. In **Governance rules (preview)**, select **Add rule**.
+1. Enter a name for the rule.
+1. Set a priority for the rule. You can see the priority for the existing rules in the list of governance rules.
+1. Select the recommendations that the rule applies to, either:
+ - **By severity** - The rule assigns the owner and due date to any recommendation in the subscription that doesn't already have them assigned.
+ - **By name** - Select the specific recommendations that the rule applies to.
+1. Set the owner to assign to the recommendations either:
+ - **By resource tag** - Enter the resource tag on your resources that defines the resource owner.
+ - **By email address** - Enter the email address of the owner to assign to the recommendations.
+1. Set the **remediation timeframe**, which is the time between when the resources are identified to require remediation and the time that the remediation is due.
+1. If you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**.
+1. If you don't want either the owner or the owner's manager to receive weekly emails, clear the notification options.
+1. Select **Create**.
+
+If there are existing recommendations that match the definition of the governance rule, you can either:
+
+- Assign an owner and due date to recommendations that don't already have an owner or due date.
+- Overwrite the owner and due date of existing recommendations.
+
+## Manually assigning owners and due dates for recommendation remediation
+
+For every resource affected by a recommendation, you can assign an owner and a due date so that you know who needs to implement the security changes to improve your security posture and when they're expected to do it by. You can also apply a grace period so that the resources that are given a due date don't impact your secure score unless they become overdue.
+
+To manually assign owners and due dates to recommendations:
+
+1. Go to the list of recommendations:
+ - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment that you want to improve.
+ - Go to **Recommendations** in the Defender for Cloud menu.
+1. In the list of recommendations, use the **Potential score increase** to identify the security control that contains recommendations that will increase your secure score.
+
+ > [!TIP]
+ > You can also use the search box and filters above the list of recommendations to find specific recommendations.
+
+1. Select a recommendation to see the affected resources.
+1. For any resource that doesn't have an owner or due date, select the resources and select **Assign owner**.
+1. Enter the email address of the owner that needs to make the changes that remediate the recommendation for those resources.
+1. Select the date by which to remediate the recommendation for the resources.
+1. You can select **Apply grace period** to keep the resource from impacting the secure score until it's overdue.
+1. Select **Save**.
+
+The recommendation is now shown as assigned and on time.
+
+## Tracking the status of recommendations for further action
+
+After you define governance rules, you'll want to review the progress that the owners are making in remediating the recommendations.
+
+You can track the assigned and overdue recommendations in:
+
+- The **Security posture** page shows the number of unassigned and overdue recommendations.
+
+ :::image type="content" source="./media/governance-rules/governance-in-security-posture.png" alt-text="Screenshot of governance status in the security posture.":::
+
+- The list of recommendations shows the governance status of each recommendation.
+
+ :::image type="content" source="./media/governance-rules/governance-in-recommendations.png" alt-text="Screenshot of recommendations with their governance status." lightbox="media/governance-rules/governance-in-recommendations.png":::
+
+- The governance report in the governance rules settings lets you drill down into recommendations by rule and owner.
+
+ :::image type="content" source="./media/governance-rules/governance-in-workbook.png" alt-text="Screenshot of governance status by rule and owner in the governance workbook." lightbox="media/governance-rules/governance-in-workbook.png":::
+
+### Tracking progress by rule with the governance report
+
+The governance report lets you select subscriptions that have governance rules and, for each rule and owner, shows you how many recommendations are completed, on time, overdue, or unassigned.
+
+To review the status of the recommendations in a rule:
+
+1. In **Recommendations**, select **Governance report (preview)**.
+1. Select the subscriptions that you want to review.
+1. Select the rules that you want to see details about.
+
+You can see the list of owners and recommendations for the selected rules, and their status.
+
+To see the list of recommendations for each owner:
+
+1. Select **Security posture**.
+1. Select the **Owner (preview)** tab to see the list of owners and the number of overdue recommendations for each owner.
+
+ - Hover over the (i) in the overdue recommendations to see the breakdown of overdue recommendations by severity.
+
+ - If the owner email address is found in the organizational Azure Active Directory (Azure AD), you'll see the full name and picture of the owner.
+
+1. Select **View recommendations** to go to the list of recommendations associated with the owner.
+
+## Next steps
+
+In this article, you learned how to set up a process for assigning owners and due dates to tasks so that owners are accountable for taking steps to improve your security posture.
+
+Check out how owners can [set ETAs for tasks](review-security-recommendations.md#manage-the-owner-and-eta-of-recommendations-that-are-assigned-to-you) so that they can manage their progress.
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
Title: Implement security recommendations in Microsoft Defender for Cloud | Microsoft Docs description: This article explains how to respond to recommendations in Microsoft Defender for Cloud to protect your resources and satisfy security policies.-+ Last updated 11/09/2021
To simplify remediation and improve your environment's security (and increase yo
> [!TIP] > The **Fix** feature is only available for specific recommendations. To find recommendations that have an available fix, use the **Response actions** filter for the list of recommendations:
->
+>
> :::image type="content" source="media/implement-security-recommendations/quick-fix-filter.png" alt-text="Use the filters above the recommendations list to find recommendations that have the Fix option."::: To implement a **Fix**:
-1. From the list of recommendations that have the **Fix** action icon, :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::, select a recommendation.
+1. From the list of recommendations that have the **Fix** action icon :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::, select a recommendation.
:::image type="content" source="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png" alt-text="Recommendations list highlighting recommendations with Fix action" lightbox="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png":::
To implement a **Fix**:
The remediation operation uses a template deployment or REST API `PATCH` request to apply the configuration on the resource. These operations are logged in [Azure activity log](../azure-monitor/essentials/activity-log.md). - ## Next steps In this document, you were shown how to remediate recommendations in Defender for Cloud. To learn how recommendations are defined and selected for your environment, see the following page:
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 06/01/2022 Last updated : 06/06/2022 zone_pivot_groups: connect-gcp-accounts
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## December 2021
+
+Updates in December include:
+
+- [Microsoft Defender for Containers plan released for general availability (GA)](#microsoft-defender-for-containers-plan-released-for-general-availability-ga)
+- [New alerts for Microsoft Defender for Storage released for general availability (GA)](#new-alerts-for-microsoft-defender-for-storage-released-for-general-availability-ga)
+- [Improvements to alerts for Microsoft Defender for Storage](#improvements-to-alerts-for-microsoft-defender-for-storage)
+- ['PortSweeping' alert removed from network layer alerts](#portsweeping-alert-removed-from-network-layer-alerts)
+
+### Microsoft Defender for Containers plan released for general availability (GA)
+
+Over two years ago, we introduced [Defender for Kubernetes](defender-for-kubernetes-introduction.md) and [Defender for container registries](defender-for-container-registries-introduction.md) as part of the Azure Defender offering within Microsoft Defender for Cloud.
+
+With the release of [Microsoft Defender for Containers](defender-for-containers-introduction.md), we've merged these two existing Defender plans.
+
+The new plan:
+
+- **Combines the features of the two existing plans** - threat detection for Kubernetes clusters and vulnerability assessment for images stored in container registries
+- **Brings new and improved features** - including multicloud support, host-level threat detection with over **sixty** new Kubernetes-aware analytics, and vulnerability assessment for running images
+- **Introduces Kubernetes-native at-scale onboarding** - by default, when you enable the plan all relevant components are configured to be deployed automatically
+
+With this release, the availability and presentation of Defender for Kubernetes and Defender for container registries has changed as follows:
+
+- New subscriptions - The two previous container plans are no longer available
+- Existing subscriptions - Wherever they appear in the Azure portal, the plans are shown as **Deprecated** with instructions for how to upgrade to the newer plan
+ :::image type="content" source="media/release-notes/defender-plans-deprecated-indicator.png" alt-text="Defender for container registries and Defender for Kubernetes plans showing 'Deprecated' and upgrade information.":::
+
+The new plan is free for the month of December 2021. For the potential changes to the billing from the old plans to Defender for Containers, and for more information on the benefits introduced with this plan, see [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317).
+
+For more information, see:
+
+- [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md)
+- [Enable Microsoft Defender for Containers](defender-for-containers-enable.md)
+- [Introducing Microsoft Defender for Containers - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)
+- [Microsoft Defender for Containers | Defender for Cloud in the Field #3 - YouTube](https://www.youtube.com/watch?v=KeH0a3enLJ0&t=201s)
+
+### New alerts for Microsoft Defender for Storage released for general availability (GA)
+
+Threat actors use tools and scripts to scan for publicly open containers in the hope of finding misconfigured open storage containers with sensitive data.
+
+Microsoft Defender for Storage detects these scanners so that you can block them and remediate your posture.
+
+The preview alert that detected this was called **"Anonymous scan of public storage containers"**. To provide greater clarity about the suspicious events discovered, we've divided this into **two** new alerts. These alerts are relevant to Azure Blob Storage only.
+
+We've improved the detection logic, updated the alert metadata, and changed the alert name and alert type.
+
+These are the new alerts:
+
+| Alert (alert type) | Description | MITRE tactic | Severity |
+|||--|-|
+| **Publicly accessible storage containers successfully discovered**<br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery) | A successful discovery of publicly open storage container(s) in your storage account was performed in the last hour by a scanning script or tool.<br><br> This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Medium |
+| **Publicly accessible storage containers unsuccessfully scanned**<br>(Storage.Blob_OpenContainersScanning.FailedAttempt) | A series of failed attempts to scan for publicly open storage containers were performed in the last hour. <br><br>This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Low |
+
+For more information, see:
+
+- [Threat matrix for storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/)
+- [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md)
+- [List of alerts provided by Microsoft Defender for Storage](alerts-reference.md#alerts-azurestorage)
+
+### Improvements to alerts for Microsoft Defender for Storage
+
+The initial access alerts now have improved accuracy and more data to support investigation.
+
+Threat actors use various techniques in the initial access to gain a foothold within a network. Two of the [Microsoft Defender for Storage](defender-for-storage-introduction.md) alerts that detect behavioral anomalies in this stage now have improved detection logic and additional data to support investigations.
+
+If you've [configured automations](workflow-automation.md) or defined [alert suppression rules](alerts-suppression-rules.md) for these alerts in the past, update them in accordance with these changes.
+
+#### Detecting access from a Tor exit node
+
+Access from a Tor exit node might indicate a threat actor trying to hide their identity.
+
+The alert is now tuned to generate only for authenticated access, which results in higher accuracy and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
+
+An outlying pattern will have high severity, while less anomalous patterns will have medium severity.
+
+The alert name and description have been updated. The AlertType remains unchanged.
+
+- Alert name (old): Access from a Tor exit node to a storage account
+- Alert name (new): Authenticated access from a Tor exit node
+- Alert types: Storage.Blob_TorAnomaly / Storage.Files_TorAnomaly
+- Description: One or more storage container(s) / file share(s) in your storage account were successfully accessed from an IP address known to be an active exit node of Tor (an anonymizing proxy). Threat actors use Tor to make it difficult to trace the activity back to them. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity. Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2
+- MITRE tactic: Initial access
+- Severity: High/Medium
+
+#### Unusual unauthenticated access
+
+A change in access patterns may indicate that a threat actor was able to exploit public read access to storage containers, either by exploiting a mistake in access configurations, or by changing the access permissions.
+
+This medium severity alert is now tuned with improved behavioral logic, higher accuracy, and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
+
+The alert name and description have been updated. The AlertType remains unchanged.
+
+- Alert name (old): Anonymous access to a storage account
+- Alert name (new): Unusual unauthenticated access to a storage container
+- Alert types: Storage.Blob_AnonymousAccessAnomaly
+- Description: This storage account was accessed without authentication, which is a change in the common access pattern. Read access to this container is usually authenticated. This might indicate that a threat actor was able to exploit public read access to storage container(s) in this storage account(s). Applies to: Azure Blob Storage
+- MITRE tactic: Collection
+- Severity: Medium
+
+For more information, see:
+
+- [Threat matrix for storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/)
+- [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md)
+- [List of alerts provided by Microsoft Defender for Storage](alerts-reference.md#alerts-azurestorage)
+
+### 'PortSweeping' alert removed from network layer alerts
+
+The following alert was removed from our network layer alerts due to inefficiencies:
+
+| Alert (alert type) | Description | MITRE tactics | Severity |
+||-|:--:||
+| **Possible outgoing port scanning activity detected**<br>(PortSweeping) | Network traffic analysis detected suspicious outgoing traffic from %{Compromised Host}. This traffic may be a result of a port scanning activity. When the compromised resource is a load balancer or an application gateway, the suspected outgoing traffic originated from one or more of the resources in the backend pool (of the load balancer or application gateway). If this behavior is intentional, please note that performing port scanning is against Azure Terms of service. If this behavior is unintentional, it may mean your resource has been compromised. | Discovery | Medium |
+ ## November 2021 Our Ignite release includes:
Other changes in November include:
- [New AKS security policy added to default initiative ΓÇô for use by private preview customers only](#new-aks-security-policy-added-to-default-initiative--for-use-by-private-preview-customers-only) - [Inventory display of on-premises machines applies different template for resource name](#inventory-display-of-on-premises-machines-applies-different-template-for-resource-name) - ### Azure Security Center and Azure Defender become Microsoft Defender for Cloud According to the [2021 State of the Cloud report](https://info.flexera.com/CM-REPORT-State-of-the-Cloud#download), 92% of organizations now have a multicloud strategy. At Microsoft, our goal is to centralize security across these environments and help security teams work more effectively.
According to the [2021 State of the Cloud report](https://info.flexera.com/CM-RE
At Ignite 2019, we shared our vision to create the most complete approach for securing your digital estate and integrating XDR technologies under the Microsoft Defender brand. Unifying Azure Security Center and Azure Defender under the new name **Microsoft Defender for Cloud**, reflects the integrated capabilities of our security offering and our ability to support any cloud platform. - ### Native CSPM for AWS and threat protection for Amazon EKS, and AWS EC2
-A new **environment settings** page provides greater visibility and control over your management groups, subscriptions, and AWS accounts. The page is designed to onboard AWS accounts at scale: connect your AWS **management account**, and you'll automatically onboard existing and future accounts.
+A new **environment settings** page provides greater visibility and control over your management groups, subscriptions, and AWS accounts. The page is designed to onboard AWS accounts at scale: connect your AWS **management account**, and you'll automatically onboard existing and future accounts.
:::image type="content" source="media/release-notes/add-aws-account.png" alt-text="Use the new environment settings page to connect your AWS accounts.":::
When you've added your AWS accounts, Defender for Cloud protects your AWS resour
Learn more about [connecting your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md). - ### Prioritize security actions by data sensitivity (powered by Microsoft Purview) (in preview)+ Data resources remain a popular target for threat actors. So it's crucial for security teams to identify, prioritize, and secure sensitive data resources across their cloud environments. To address this challenge, Microsoft Defender for Cloud now integrates sensitivity information from [Microsoft Purview](../purview/overview.md). Microsoft Purview is a unified data governance service that provides rich insights into the sensitivity of your data within multicloud, and on-premises workloads.
The integration with Microsoft Purview extends your security visibility in Defen
Learn more in [Prioritize security actions by data sensitivity](information-protection.md). - ### Expanded security control assessments with Azure Security Benchmark v3
-Microsoft Defender for Cloud's security recommendations are enabled and supported by the Azure Security Benchmark.
+
+Microsoft Defender for Cloud's security recommendations are enabled and supported by the Azure Security Benchmark.
[Azure Security Benchmark](../security/benchmarks/introduction.md) is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. From Ignite 2021, Azure Security Benchmark **v3** is available in [Defender for Cloud's regulatory compliance dashboard](update-regulatory-compliance-packages.md) and enabled as the new default initiative for all Azure subscriptions protected with Microsoft
-Defender for Cloud.
+Defender for Cloud.
-Enhancements for v3 include:
+Enhancements for v3 include:
- Additional mappings to industry frameworks [PCI-DSS v3.2.1](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2-1.pdf) and [CIS Controls v8](https://www.cisecurity.org/controls/v8/). - More granular and actionable guidance for controls with the introduction of:
- - **Security Principles** - Providing insight into the overall security objectives that build the foundation for our recommendations.
- - **Azure Guidance** - The technical "how-to" for meeting these objectives.
+ - **Security Principles** - Providing insight into the overall security objectives that build the foundation for our recommendations.
+ - **Azure Guidance** - The technical "how-to" for meeting these objectives.
- New controls include DevOps security for issues such as threat modeling and software supply chain security, as well as key and certificate management for best practices in Azure. Learn more in [Introduction to Azure Security Benchmark](/security/benchmark/azure/introduction). - ### Microsoft Sentinel connector's optional bi-directional alert synchronization released for general availability (GA) In July, [we announced](release-notes-archive.md#azure-sentinel-connector-now-includes-optional-bi-directional-alert-synchronization-in-preview) a preview feature, **bi-directional alert synchronization**, for the built-in connector in [Microsoft Sentinel](../sentinel/index.yml) (Microsoft's cloud-native SIEM and SOAR solution). This feature is now released for general availability (GA).
SecOps teams can choose the relevant Microsoft Sentinel workspace directly from
The new recommendation, "Diagnostic logs in Kubernetes services should be enabled" includes the 'Fix' option for faster remediation.
-We've also enhanced the "Auditing on SQL server should be enabled" recommendation with the same Sentinel streaming capabilities.
-
+We've also enhanced the "Auditing on SQL server should be enabled" recommendation with the same Sentinel streaming capabilities.
### Recommendations mapped to the MITRE ATT&CK® framework - released for general availability (GA)
In October, [we announced](release-notes-archive.md#microsoft-threat-and-vulnera
Use **threat and vulnerability management** to discover vulnerabilities and misconfigurations in near real time with the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) enabled, and without the need for additional agents or periodic scans. Threat and vulnerability management prioritizes vulnerabilities based on the threat landscape and detections in your organization.
-Use the security recommendation "[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)" to surface the vulnerabilities detected by threat and vulnerability management for your [supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os?view=o365-worldwide&preserve-view=true).
+Use the security recommendation "[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)" to surface the vulnerabilities detected by threat and vulnerability management for your [supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os?view=o365-worldwide&preserve-view=true).
To automatically surface the vulnerabilities, on existing and new machines, without the need to manually remediate the recommendation, see [Vulnerability assessment solutions can now be auto enabled (in preview)](release-notes-archive.md#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview).
When Defender for Endpoint detects a threat, it triggers an alert. The alert is
Learn more in [Protect your endpoints with Security Center's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md). - ### Snapshot export for recommendations and security findings (in preview) Defender for Cloud generates detailed security alerts and recommendations. You can view them in the portal or through programmatic tools. You might also need to export some or all of this information for tracking with other monitoring tools in your environment.
In October, [we announced](release-notes-archive.md#software-inventory-filters-a
You can query the software inventory data in **Azure Resource Graph Explorer**.
-To use these features, you'll need to enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
+To use these features, you'll need to enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
For full details, including sample Kusto queries for Azure Resource Graph, see [Access a software inventory](asset-inventory.md#access-a-software-inventory).
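As a flavor of what such a query can look like, here's a minimal sketch that uses the Az.ResourceGraph PowerShell module. The resource type and property names follow the documented `securityresources` schema; treat them as assumptions to verify against the linked article:

```powershell
# Requires the Az.ResourceGraph module: Install-Module Az.ResourceGraph
# Lists software names and versions reported by the software inventory.
Search-AzGraph -Query @"
securityresources
| where type == 'microsoft.security/softwareinventories'
| project softwareName = properties.softwareName, version = properties.version
"@
```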
To ensure that Kubernetes workloads are secure by default, Defender for Cloud in
As part of this project, we've added a policy and recommendation (disabled by default) for gating deployment on Kubernetes clusters. The policy is in the default initiative but is only relevant for organizations who register for the related private preview.
-You can safely ignore the policies and recommendation ("Kubernetes clusters should gate deployment of vulnerable images") and there will be no impact on your environment.
+You can safely ignore the policies and recommendation ("Kubernetes clusters should gate deployment of vulnerable images") and there will be no impact on your environment.
If you'd like to participate in the private preview, you'll need to be a member of the private preview ring. If you're not already a member, submit a request [here](https://aka.ms/atscale). Members will be notified when the preview begins.
Updates in October include:
- [Recommendations details pages now show related recommendations](#recommendations-details-pages-now-show-related-recommendations) - [New alerts for Azure Defender for Kubernetes (in preview)](#new-alerts-for-azure-defender-for-kubernetes-in-preview) - ### Microsoft Threat and Vulnerability Management added as vulnerability assessment solution (in preview)
-We've extended the integration between [Azure Defender for Servers](defender-for-servers-introduction.md) and Microsoft Defender for Endpoint, to support a new vulnerability assessment provider for your machines: [Microsoft threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt).
+We've extended the integration between [Azure Defender for Servers](defender-for-servers-introduction.md) and Microsoft Defender for Endpoint, to support a new vulnerability assessment provider for your machines: [Microsoft threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt).
Use **threat and vulnerability management** to discover vulnerabilities and misconfigurations in near real time with the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) enabled, and without the need for additional agents or periodic scans. Threat and vulnerability management prioritizes vulnerabilities based on the threat landscape and detections in your organization.
-Use the security recommendation "[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)" to surface the vulnerabilities detected by threat and vulnerability management for your [supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os?view=o365-worldwide&preserve-view=true).
+Use the security recommendation "[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ffff0522-1e88-47fc-8382-2a80ba848f5d)" to surface the vulnerabilities detected by threat and vulnerability management for your [supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os?view=o365-worldwide&preserve-view=true).
To automatically surface the vulnerabilities, on existing and new machines, without the need to manually remediate the recommendation, see [Vulnerability assessment solutions can now be auto enabled (in preview)](#vulnerability-assessment-solutions-can-now-be-auto-enabled-in-preview).
Learn more in [Automatically configure vulnerability assessment for your machine
### Software inventory filters added to asset inventory (in preview)
-The [asset inventory](asset-inventory.md) page now includes a filter to select machines running specific software - and even specify the versions of interest.
+The [asset inventory](asset-inventory.md) page now includes a filter to select machines running specific software - and even specify the versions of interest.
Additionally, you can query the software inventory data in **Azure Resource Graph Explorer**.
-To use these new features, you'll need to enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
+To use these new features, you'll need to enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
For full details, including sample Kusto queries for Azure Resource Graph, see [Access a software inventory](asset-inventory.md#access-a-software-inventory). :::image type="content" source="media/deploy-vulnerability-assessment-tvm/software-inventory.png" alt-text="If you've enabled the threat and vulnerability solution, Security Center's asset inventory offers a filter to select resources by their installed software.":::
-### Changed prefix of some alert types from "ARM_" to "VM_"
+### Changed prefix of some alert types from "ARM_" to "VM_"
-In July 2021, we announced a [logical reorganization of Azure Defender for Resource Manager alerts](release-notes-archive.md#logical-reorganization-of-azure-defender-for-resource-manager-alerts)
+In July 2021, we announced a [logical reorganization of Azure Defender for Resource Manager alerts](release-notes-archive.md#logical-reorganization-of-azure-defender-for-resource-manager-alerts).
As part of a logical reorganization of some of the Azure Defender plans, we moved twenty-one alerts from [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) to [Azure Defender for Servers](defender-for-servers-introduction.md).
With this update, we've changed the prefixes of these alerts to match this reass
| ARM_VMAccessUnusualPasswordReset | VM_VMAccessUnusualPasswordReset | | ARM_VMAccessUnusualSSHReset | VM_VMAccessUnusualSSHReset | - Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for Servers](defender-for-servers-introduction.md) plans. ### Changes to the logic of a security recommendation for Kubernetes clusters
-The recommendation "Kubernetes clusters should not use the default namespace" prevents usage of the default namespace for a range of resource types. Two of the resource types that were included in this recommendation have been removed: ConfigMap and Secret.
+The recommendation "Kubernetes clusters should not use the default namespace" prevents usage of the default namespace for a range of resource types. Two of the resource types that were included in this recommendation have been removed: ConfigMap and Secret.
Learn more about this recommendation and hardening your Kubernetes clusters in [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md).

### Recommendations details pages now show related recommendations
-To clarify the relationships between different recommendations, we've added a **Related recommendations** area to the details pages of many recommendations.
+To clarify the relationships between different recommendations, we've added a **Related recommendations** area to the details pages of many recommendations.
The three relationship types that are shown on these pages are:
Obviously, Security Center can't notify you about discovered vulnerabilities unl
Therefore:
+- Recommendation #1 is a prerequisite for recommendation #2
+- Recommendation #2 depends upon recommendation #1
:::image type="content" source="media/release-notes/related-recommendations-solution-not-found.png" alt-text="Screenshot of recommendation to deploy vulnerability assessment solution."::: :::image type="content" source="media/release-notes/related-recommendations-vulnerabilities-found.png" alt-text="Screenshot of recommendation to resolve discovered vulnerabilities."::: -- ### New alerts for Azure Defender for Kubernetes (in preview) To expand the threat protections provided by Azure Defender for Kubernetes, we've added two preview alerts.
These alerts are generated based on a new machine learning model and Kubernetes
| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) | Kubernetes audit log analysis detected a pod deployment that is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation relate to one another. The monitored features include the container image registry used, the account performing the deployment, the day of the week, how often the account performs pod deployments, the user agent used in the operation, whether the namespace is one where pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) | Analysis of the Kubernetes audit logs detected an excessive-permissions role assignment to your cluster. From examining role assignments, the listed permissions are uncommon for the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, the volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Azure Defender. | Privilege Escalation | Low |

For a full list of the Kubernetes alerts, see [Alerts for Kubernetes clusters](alerts-reference.md#alerts-k8scluster).

## September 2021
We've added two **preview** recommendations to deploy and maintain the endpoint
|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. <br> <a href="/azure/defender-for-cloud/endpoint-protection-recommendations-technical">Learn more about how Endpoint Protection for machines is evaluated.</a><br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |High |
|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) |Resolve endpoint protection health issues on your virtual machines to protect them from the latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented [here](./supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows). Endpoint protection assessment is documented <a href='/azure/defender-for-cloud/endpoint-protection-recommendations-technical'>here</a>.<br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |Medium |

> [!NOTE]
> The recommendations show their freshness interval as 8 hours, but there are some scenarios in which this might take significantly longer. For example, when an on-premises machine is deleted, it takes 24 hours for Security Center to identify the deletion. After that, the assessment takes up to 8 hours to return the information. In that specific situation, therefore, it may take 32 hours for the machine to be removed from the list of affected resources.
>
A new, dedicated area of the Security Center pages in the Azure portal provides
When you're facing an issue, or are seeking advice from our support team, **Diagnose and solve problems** is another tool to help you find the solution: :::image type="content" source="media/release-notes/solve-problems.png" alt-text="Security Center's 'Diagnose and solve problems' page":::
-
### Regulatory compliance dashboard's Azure Audit reports released for general availability (GA)
-The regulatory compliance dashboard's toolbar offers Azure and Dynamics certification reports for the standards applied to your subscriptions.
+The regulatory compliance dashboard's toolbar offers Azure and Dynamics certification reports for the standards applied to your subscriptions.
:::image type="content" source="media/release-notes/audit-reports-regulatory-compliance-dashboard.png" alt-text="Regulatory compliance dashboard's toolbar showing the button for generating audit reports.":::
It's likely that this change will impact your secure scores. For most subscripti
> [!TIP]
> The [asset inventory](asset-inventory.md) page was also affected by this change as it displays the monitored status for machines (monitored, not monitored, or partially monitored - a state which refers to an agent with health issues).

### Azure Defender for container registries now scans for vulnerabilities in registries protected with Azure Private Link

Azure Defender for container registries includes a vulnerability scanner to scan images in your Azure Container Registry registries. Learn how to scan your registries and remediate findings in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md).

To limit access to a registry hosted in Azure Container Registry, assign virtual network private IP addresses to the registry endpoints and use Azure Private Link as explained in [Connect privately to an Azure container registry using Azure Private Link](../container-registry/container-registry-private-link.md).

As part of our ongoing efforts to support additional environments and use cases, Azure Defender now also scans container registries protected with [Azure Private Link](../private-link/private-link-overview.md).

### Security Center can now auto provision the Azure Policy's Guest Configuration extension (in preview)

Azure Policy can audit settings inside a machine, both for machines running in Azure and Arc-connected machines. The validation is performed by the Guest Configuration extension and client. Learn more in [Understand Azure Policy's Guest Configuration](../governance/policy/concepts/guest-configuration.md).
-With this update, you can now set Security Center to automatically provision this extension to all supported machines.
+With this update, you can now set Security Center to automatically provision this extension to all supported machines.
:::image type="content" source="media/release-notes/auto-provisioning-guest-configuration.png" alt-text="Enable auto deployment of Guest Configuration extension."::: Learn more about how auto provisioning works in [Configure auto provisioning for agents and extensions](enable-data-collection.md). ### Recommendations to enable Azure Defender plans now support "Enforce"+ Security Center includes two features that help ensure newly created resources are provisioned in a secure manner: **enforce** and **deny**. When a recommendation offers these options, you can ensure your security requirements are met whenever someone attempts to create a resource: - **Deny** stops unhealthy resources from being created
If you need to export larger amounts of data, use the available filters before s
Learn more about [performing a CSV export of your security recommendations](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations).

### Recommendations page now includes multiple views

The recommendations page now has two tabs to provide alternate ways to view the recommendations relevant to your resources:
The recommendations page now has two tabs to provide alternate ways to view the
Updates in July include: - [Azure Sentinel connector now includes optional bi-directional alert synchronization (in preview)](#azure-sentinel-connector-now-includes-optional-bi-directional-alert-synchronization-in-preview)-- [Logical reorganization of Azure Defender for Resource Manager alerts](#logical-reorganization-of-azure-defender-for-resource-manager-alerts) -- [Enhancements to recommendation to enable Azure Disk Encryption (ADE)](#enhancements-to-recommendation-to-enable-azure-disk-encryption-ade)
+- [Logical reorganization of Azure Defender for Resource Manager alerts](#logical-reorganization-of-azure-defender-for-resource-manager-alerts)
+- [Enhancements to recommendation to enable Azure Disk Encryption (ADE)](#enhancements-to-recommendation-to-enable-azure-disk-encryption-ade)
- [Continuous export of secure score and regulatory compliance data released for general availability (GA)](#continuous-export-of-secure-score-and-regulatory-compliance-data-released-for-general-availability-ga) - [Workflow automations can be triggered by changes to regulatory compliance assessments (GA)](#workflow-automations-can-be-triggered-by-changes-to-regulatory-compliance-assessments-ga) - [Assessments API field 'FirstEvaluationDate' and 'StatusChangeDate' now available in workspace schemas and logic apps](#assessments-api-field-firstevaluationdate-and-statuschangedate-now-available-in-workspace-schemas-and-logic-apps)
Updates in July include:
### Azure Sentinel connector now includes optional bi-directional alert synchronization (in preview)
-Security Center natively integrates with [Azure Sentinel](../sentinel/index.yml), Azure's cloud-native SIEM and SOAR solution.
+Security Center natively integrates with [Azure Sentinel](../sentinel/index.yml), Azure's cloud-native SIEM and SOAR solution.
Azure Sentinel includes built-in connectors for Azure Security Center at the subscription and tenant levels. Learn more in [Stream alerts to Azure Sentinel](export-to-siem.md#stream-alerts-to-microsoft-sentinel).
These are the alerts that were part of Azure Defender for Resource Manager, and
Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for Servers](defender-for-servers-introduction.md) plans.

### Enhancements to recommendation to enable Azure Disk Encryption (ADE)

Following user feedback, we've renamed the recommendation **Disk encryption should be applied on virtual machines**.
The description has also been updated to better explain the purpose of this hard
| Recommendation | Description | Severity | |--|--|:--:|
-| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. For a comparison of different disk encryption technologies in Azure, see https://aka.ms/diskencryptioncomparison.<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) you're using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
--
+| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. For a comparison of different disk encryption technologies in Azure, see <https://aka.ms/diskencryptioncomparison>.<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) you're using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
### Continuous export of secure score and regulatory compliance data released for general availability (GA)
We've enhanced and expanded this feature over time:
- In December 2020, we added the **preview** option to stream changes to your **regulatory compliance assessment data**.<br/>For full details, see [Continuous export gets new data types (preview)](release-notes-archive.md#continuous-export-gets-new-data-types-and-improved-deployifnotexist-policies).
-With this update, these two options are released for general availability (GA).
-
+With this update, these two options are released for general availability (GA).
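Alongside the continuous export stream, secure score data can also be read on demand from Azure Resource Graph; a minimal sketch, assuming the documented `microsoft.security/securescores` type:

```kusto
// Current secure score per subscription
securityresources
| where type == "microsoft.security/securescores"
| project subscriptionId,
          currentScore = todouble(properties.score.current),
          maxScore = todouble(properties.score.max)
```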
### Workflow automations can be triggered by changes to regulatory compliance assessments (GA)
Those fields were accessible through the REST API, Azure Resource Graph, continu
With this change, we're making the information available in the Log Analytics workspace schema and from logic apps.

### 'Compliance over time' workbook template added to Azure Monitor Workbooks gallery

In March, we announced the integrated Azure Monitor Workbooks experience in Security Center (see [Azure Monitor Workbooks integrated into Security Center and three templates provided](release-notes-archive.md#azure-monitor-workbooks-integrated-into-security-center-and-three-templates-provided)). The initial release included three templates to build dynamic and visual reports about your organization's security posture.
-We've now added a workbook dedicated to tracking a subscription's compliance with the regulatory or industry standards applied to it.
+We've now added a workbook dedicated to tracking a subscription's compliance with the regulatory or industry standards applied to it.
Learn about using these reports or building your own in [Create rich, interactive reports of Security Center data](custom-dashboards-azure-workbooks.md).
Updates in June include:
- [Prefix for Kubernetes alerts changed from "AKS_" to "K8S_"](#prefix-for-kubernetes-alerts-changed-from-aks_-to-k8s_)
- [Deprecated two recommendations from "Apply system updates" security control](#deprecated-two-recommendations-from-apply-system-updates-security-control)

### New alert for Azure Defender for Key Vault

To expand the threat protections provided by Azure Defender for Key Vault, we've added the following alert:
To expand the threat protections provided by Azure Defender for Key Vault, we've
|||:--:|-|
| Access from a suspicious IP address to a key vault<br>(KV_SuspiciousIPAccess) | A key vault has been successfully accessed by an IP address that Microsoft Threat Intelligence has identified as suspicious. This may indicate that your infrastructure has been compromised. We recommend further investigation. Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | Credential Access | Medium |

For more information, see:

- [Introduction to Azure Defender for Key Vault](defender-for-key-vault-introduction.md)
- [Respond to Azure Defender for Key Vault alerts](defender-for-key-vault-usage.md)
- [List of alerts provided by Azure Defender for Key Vault](alerts-reference.md#alerts-azurekv)

### Recommendations to encrypt with customer-managed keys (CMKs) disabled by default

Security Center includes multiple recommendations to encrypt data at rest with customer-managed keys, such as:
This change is reflected in the names of the recommendation with a new prefix, *
:::image type="content" source="media/upcoming-changes/customer-managed-keys-disabled.png" alt-text="Security Center's CMK recommendations will be disabled by default." lightbox="media/upcoming-changes/customer-managed-keys-disabled.png"::: - ### Prefix for Kubernetes alerts changed from "AKS_" to "K8S_" Azure Defender for Kubernetes recently expanded to protect Kubernetes clusters hosted on-premises and in multicloud environments. Learn more in [Use Azure Defender for Kubernetes to protect hybrid and multicloud Kubernetes deployments (in preview)](release-notes-archive.md#use-azure-defender-for-kubernetes-to-protect-hybrid-and-multicloud-kubernetes-deployments-in-preview).
To reflect the fact that the security alerts provided by Azure Defender for Kube
|-|-|
|Kubernetes penetration testing tool detected<br>(**AKS**_PenTestToolsKubeHunter)|Kubernetes audit log analysis detected usage of a Kubernetes penetration testing tool in the **AKS** cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes.|

was changed to:

|Alert (alert type)|Description|
|-|-|
|Kubernetes penetration testing tool detected<br>(**K8S**_PenTestToolsKubeHunter)|Kubernetes audit log analysis detected usage of a Kubernetes penetration testing tool in the **Kubernetes** cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes.|

Any suppression rules that refer to alerts beginning "AKS_" were automatically converted. If you've set up SIEM exports, or custom automation scripts that refer to Kubernetes alerts by alert type, you'll need to update them with the new alert types (see the sketch below).

For a full list of the Kubernetes alerts, see [Alerts for Kubernetes clusters](alerts-reference.md#alerts-k8scluster).
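As a hedged illustration of that cleanup, the following sketch assumes alerts are exported to a Log Analytics workspace with the standard `SecurityAlert` table, and surfaces any alert types still arriving with the retired prefix:

```kusto
// Find ingested alerts that still use the retired "AKS_" prefix
SecurityAlert
| where AlertType startswith "AKS_"
| summarize alertCount = count() by AlertType
```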
The following two recommendations were deprecated:
- **OS version should be updated for your cloud service roles** - By default, Azure periodically updates your guest OS to the latest supported image within the OS family that you've specified in your service configuration (.cscfg), such as Windows Server 2016.
- **Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version** - This recommendation's evaluations aren't as wide-ranging as we'd like them to be. We plan to replace the recommendation with an enhanced version that's better aligned with your security needs.

## May 2021

Updates in May include:
Updates in May include:
- [Assessments API expanded with two new fields](#assessments-api-expanded-with-two-new-fields)
- [Asset inventory gets a cloud environment filter](#asset-inventory-gets-a-cloud-environment-filter)

### Azure Defender for DNS and Azure Defender for Resource Manager released for general availability (GA)

These two cloud-native breadth threat protection plans are now GA.
These two cloud-native breadth threat protection plans are now GA.
These new protections greatly enhance your resiliency against attacks from threat actors, and significantly increase the number of Azure resources protected by Azure Defender.

- **Azure Defender for Resource Manager** - automatically monitors all resource management operations performed in your organization. For more information, see:
- - [Introduction to Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md)
- - [Respond to Azure Defender for Resource Manager alerts](defender-for-resource-manager-usage.md)
- - [List of alerts provided by Azure Defender for Resource Manager](alerts-reference.md#alerts-resourcemanager)
+ - [Introduction to Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md)
+ - [Respond to Azure Defender for Resource Manager alerts](defender-for-resource-manager-usage.md)
+ - [List of alerts provided by Azure Defender for Resource Manager](alerts-reference.md#alerts-resourcemanager)
- **Azure Defender for DNS** - continuously monitors all DNS queries from your Azure resources. For more information, see:
- - [Introduction to Azure Defender for DNS](defender-for-dns-introduction.md)
- - [Respond to Azure Defender for DNS alerts](defender-for-dns-usage.md)
- - [List of alerts provided by Azure Defender for DNS](alerts-reference.md#alerts-dns)
+ - [Introduction to Azure Defender for DNS](defender-for-dns-introduction.md)
+ - [Respond to Azure Defender for DNS alerts](defender-for-dns-usage.md)
+ - [List of alerts provided by Azure Defender for DNS](alerts-reference.md#alerts-dns)
To simplify the process of enabling these plans, use the recommendations:
To simplify the process of enabling these plans, use the recommendations:
- **Azure Defender for DNS should be enabled**

> [!NOTE]
-> Enabling Azure Defender plans results in charges. Learn about the pricing details per region on Security Center's pricing page: https://aka.ms/pricing-security-center.
-
+> Enabling Azure Defender plans results in charges. Learn about the pricing details per region on Security Center's [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
### Azure Defender for open-source relational databases released for general availability (GA)
To expand the threat protections provided by Azure Defender for Resource Manager
|**Azure Resource Manager operation from suspicious IP address (Preview)**<br>(ARM_OperationFromSuspiciousIP)|Azure Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds.|Execution|Medium|
|**Azure Resource Manager operation from suspicious proxy IP address (Preview)**<br>(ARM_OperationFromSuspiciousProxyIP)|Azure Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP.|Defense Evasion|Medium|

For more information, see:

- [Introduction to Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md)
- [Respond to Azure Defender for Resource Manager alerts](defender-for-resource-manager-usage.md)
- [List of alerts provided by Azure Defender for Resource Manager](alerts-reference.md#alerts-resourcemanager)

### CI/CD vulnerability scanning of container images with GitHub workflows and Azure Defender (preview)

Azure Defender for container registries now provides DevSecOps teams observability into GitHub Actions workflows.
Azure offers trusted launch as a seamless way to improve the security of [genera
> [!IMPORTANT]
> Trusted launch requires the creation of new virtual machines. You can't enable trusted launch on existing virtual machines that were initially created without it.
->
+>
> Trusted launch is currently in public preview. The preview is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.

Security Center's recommendation, **vTPM should be enabled on supported virtual machines**, ensures your Azure VMs are using a vTPM. This virtualized version of a hardware Trusted Platform Module enables attestation by measuring the entire boot chain of your VM (UEFI, OS, system, and drivers).
With the vTPM enabled, the **Guest Attestation extension** can remotely validate
- **Guest Attestation extension should be installed on supported Linux virtual machines**
- **Guest Attestation extension should be installed on supported Linux virtual machine scale sets**
-Learn more in [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
+Learn more in [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
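To gauge which virtual machines are already in scope for these recommendations, an Azure Resource Graph sketch along these lines can help; it assumes the `securityProfile.securityType` property that trusted launch VMs expose:

```kusto
// Virtual machines created with trusted launch enabled
resources
| where type == "microsoft.compute/virtualmachines"
| where tostring(properties.securityProfile.securityType) == "TrustedLaunch"
| project name, resourceGroup, location
```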
### New recommendations for hardening Kubernetes clusters (in preview)
To access this information, you can use any of the methods in the table below.
| Tool | Details |
|------|---------|
-| REST API call | GET https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Security/assessments?api-version=2019-01-01-preview&$expand=statusEvaluationDates |
+| REST API call | GET <https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Security/assessments?api-version=2019-01-01-preview&$expand=statusEvaluationDates> |
| Azure Resource Graph | `securityresources`<br>`where type == "microsoft.security/assessments"` |
| Continuous export | The two dedicated fields will be available in the Log Analytics workspace data |
| [CSV export](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations) | The two fields are included in the CSV files |

Learn more about the [Assessments REST API](/rest/api/securitycenter/assessments).

### Asset inventory gets a cloud environment filter

Security Center's asset inventory page offers many filters to quickly refine the list of resources displayed. Learn more in [Explore and manage your resources with asset inventory](asset-inventory.md).
Learn more about the multicloud capabilities:
- [Connect your AWS accounts to Azure Security Center](quickstart-onboard-aws.md)
- [Connect your GCP projects to Azure Security Center](quickstart-onboard-gcp.md)

## April 2021

Updates in April include:

- [Refreshed resource health page (in preview)](#refreshed-resource-health-page-in-preview)
- [Container registry images that have been recently pulled are now rescanned weekly (released for general availability (GA))](#container-registry-images-that-have-been-recently-pulled-are-now-rescanned-weekly-released-for-general-availability-ga)
- [Use Azure Defender for Kubernetes to protect hybrid and multicloud Kubernetes deployments (in preview)](#use-azure-defender-for-kubernetes-to-protect-hybrid-and-multicloud-kubernetes-deployments-in-preview)
Updates in April include:
### Refreshed resource health page (in preview)
-Security Center's resource health has been expanded, enhanced, and improved to provide a snapshot view of the overall health of a single resource.
+Security Center's resource health has been expanded, enhanced, and improved to provide a snapshot view of the overall health of a single resource.
You can review detailed information about the resource and all recommendations that apply to that resource. Also, if you're using [the advanced protection plans of Microsoft Defender](defender-for-cloud-introduction.md), you can see outstanding security alerts for that specific resource too.
This preview page in Security Center's portal pages shows:
Learn more in [Tutorial: Investigate the health of your resources](investigate-resource-health.md).

### Container registry images that have been recently pulled are now rescanned weekly (released for general availability (GA))

Azure Defender for container registries includes a built-in vulnerability scanner. This scanner immediately scans any image you push to your registry and any image pulled within the last 30 days.
Scanning is charged on a per image basis, so there's no additional charge for th
Learn more about this scanner in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md).

### Use Azure Defender for Kubernetes to protect hybrid and multicloud Kubernetes deployments (in preview)
-Azure Defender for Kubernetes is expanding its threat protection capabilities to defend your clusters wherever they're deployed. This has been enabled by integrating with [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and its new [extensions capabilities](../azure-arc/kubernetes/extensions.md).
+Azure Defender for Kubernetes is expanding its threat protection capabilities to defend your clusters wherever they're deployed. This has been enabled by integrating with [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and its new [extensions capabilities](../azure-arc/kubernetes/extensions.md).
When you've enabled Azure Arc on your non-Azure Kubernetes clusters, a new recommendation from Azure Security Center offers to deploy the Azure Defender extension to them with only a few clicks.
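To see which clusters that recommendation can target, a Resource Graph sketch like the following lists your Arc-enabled Kubernetes clusters; it assumes the `microsoft.kubernetes/connectedclusters` resource type used by Azure Arc-enabled Kubernetes:

```kusto
// Arc-enabled Kubernetes clusters eligible for the Azure Defender extension recommendation
resources
| where type == "microsoft.kubernetes/connectedclusters"
| project name, resourceGroup, location
```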
Learn more in [Use Azure Defender for Kubernetes with your on-premises and multi
:::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-recommendation.png" alt-text="Azure Security Center's recommendation for deploying the Azure Defender extension for Azure Arc-enabled Kubernetes clusters." lightbox="media/defender-for-kubernetes-azure-arc/extension-recommendation.png"::: - ### Microsoft Defender for Endpoint integration with Azure Defender now supports Windows Server 2019 and Windows 10 on Windows Virtual Desktop released for general availability (GA) Microsoft Defender for Endpoint is a holistic, cloud delivered endpoint security solution. It provides risk-based vulnerability management and assessment as well as endpoint detection and response (EDR). For a full list of the benefits of using Defender for Endpoint together with Azure Security Center, see [Protect your endpoints with Security Center's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
-When you enable Azure Defender for Servers running Windows Server, a license for Defender for Endpoint is included with the plan. If you've already enabled Azure Defender for Servers and you have Windows Server 2019 servers in your subscription, they'll automatically receive Defender for Endpoint with this update. No manual action is required.
+When you enable Azure Defender for Servers running Windows Server, a license for Defender for Endpoint is included with the plan. If you've already enabled Azure Defender for Servers and you have Windows Server 2019 servers in your subscription, they'll automatically receive Defender for Endpoint with this update. No manual action is required.
Support has now been expanded to include Windows Server 2019 and Windows 10 on [Windows Virtual Desktop](../virtual-desktop/overview.md).

> [!NOTE]
> If you're enabling Defender for Endpoint on a Windows Server 2019 server, ensure it meets the prerequisites described in [Enable the Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration).

### Recommendations to enable Azure Defender for DNS and Resource Manager (in preview)

Two new recommendations have been added to simplify the process of enabling [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for DNS](defender-for-dns-introduction.md):
Two new recommendations have been added to simplify the process of enabling [Azu
- **Azure Defender for Resource Manager should be enabled** - Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. - **Azure Defender for DNS should be enabled** - Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer.
-Enabling Azure Defender plans results in charges. Learn about the pricing details per region on Security Center's pricing page: https://aka.ms/pricing-security-center.
+Enabling Azure Defender plans results in charges. Learn about the pricing details per region on Security Center's [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
> [!TIP]
> Preview recommendations don't render a resource unhealthy, and they aren't included in the calculations of your secure score. Remediate them wherever possible, so that when the preview period ends they'll contribute towards your score. Learn more about how to respond to these recommendations in [Remediate recommendations in Azure Security Center](implement-security-recommendations.md).

### Three regulatory compliance standards added: Azure CIS 1.3.0, CMMC Level 3, and New Zealand ISM Restricted

We've added three standards for use with Azure Security Center. Using the regulatory compliance dashboard, you can now track your compliance with:
You can assign these to your subscriptions as described in [Customize the set of
:::image type="content" source="media/release-notes/additional-regulatory-compliance-standards.png" alt-text="Three standards added for use with Azure Security Center's regulatory compliance dashboard." lightbox="media/release-notes/additional-regulatory-compliance-standards.png"::: Learn more in:+ - [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md) - [Tutorial: Improve your regulatory compliance](regulatory-compliance-dashboard.md) - [FAQ - Regulatory compliance dashboard](regulatory-compliance-dashboard.md#faqregulatory-compliance-dashboard)
Azure's [Guest Configuration extension](../governance/policy/concepts/guest-conf
We've added four new recommendations to Security Center to make the most of this extension. - Two recommendations prompt you to install the extension and its required system-managed identity:
- - **Guest Configuration extension should be installed on your machines**
- - **Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity**
+ - **Guest Configuration extension should be installed on your machines**
+ - **Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity**
- When the extension is installed and running, it will begin auditing your machines and you'll be prompted to harden settings such as operating system configuration and environment settings. These two recommendations will prompt you to harden your Windows and Linux machines as described:
- - **Windows Defender Exploit Guard should be enabled on your machines**
- - **Authentication to Linux machines should require SSH keys**
+ - **Windows Defender Exploit Guard should be enabled on your machines**
+ - **Authentication to Linux machines should require SSH keys**
Learn more in [Understand Azure Policy's Guest Configuration](../governance/policy/concepts/guest-configuration.md).
The recommendations listed below are being moved to the **Implement security bes
Learn which recommendations are in each security control in [Security controls and their recommendations](secure-score-security-controls.md#security-controls-and-their-recommendations).

### 11 Azure Defender alerts deprecated

The 11 Azure Defender alerts listed below have been deprecated.
The 11 Azure Defender alerts listed below have been deprecated.
| ARM_MicroBurstDomainInfo | PREVIEW - MicroBurst toolkit "Get-AzureDomainInfo" function run detected |
| ARM_MicroBurstRunbook | PREVIEW - MicroBurst toolkit "Get-AzurePasswords" function run detected |

These nine alerts relate to an Azure Active Directory Identity Protection connector (IPC) that has already been deprecated:

| AlertType | AlertDisplayName |
The 11 Azure Defender alerts listed below have been deprecated.
| LeakedCredentials | Azure AD threat intelligence |
| AADAI | Azure AD AI |
-
> [!TIP]
> These nine IPC alerts were never Security Center alerts. They're part of the Azure Active Directory (AAD) Identity Protection connector (IPC) that was sending them to Security Center. For the last two years, the only customers who've been seeing those alerts are organizations that configured the export (from the connector to ASC) in 2019 or earlier. AAD IPC has continued to show them in its own alerts systems, and they've continued to be available in Azure Sentinel. The only change is that they're no longer appearing in Security Center.
-### Two recommendations from "Apply system updates" security control were deprecated
+### Two recommendations from "Apply system updates" security control were deprecated
The following two recommendations were deprecated and the changes might result in a slight impact on your secure score:
Learn more about these recommendations in the [security recommendations referenc
The Azure Defender dashboard's coverage area includes tiles for the relevant Azure Defender plans for your environment. Due to an issue with the reporting of the numbers of protected and unprotected resources, we've decided to temporarily remove the resource coverage status for **Azure Defender for SQL on machines** until the issue is resolved. -
-### 21 recommendations moved between security controls
+### 21 recommendations moved between security controls
The following recommendations were moved to different security controls. Security controls are logical groups of related security recommendations, and reflect your vulnerable attack surfaces. This move ensures that each of these recommendations is in the most appropriate control to meet its objective.
Learn which recommendations are in each security control in [Security controls a
|Vulnerability assessment should be enabled on your SQL servers<br>Vulnerability assessment should be enabled on your SQL managed instances<br>Vulnerabilities on your SQL databases should be remediated new<br>Vulnerabilities on your SQL databases in VMs should be remediated |Moving from Remediate vulnerabilities (worth 6 points)<br>to Remediate security configurations (worth 4 points).<br>Depending on your environment, these recommendations will have a reduced impact on your score.|
|There should be more than one owner assigned to your subscription<br>Automation account variables should be encrypted<br>IoT Devices - Auditd process stopped sending events<br>IoT Devices - Operating system baseline validation failure<br>IoT Devices - TLS cipher suite upgrade needed<br>IoT Devices - Open Ports On Device<br>IoT Devices - Permissive firewall policy in one of the chains was found<br>IoT Devices - Permissive firewall rule in the input chain was found<br>IoT Devices - Permissive firewall rule in the output chain was found<br>Diagnostic logs in IoT Hub should be enabled<br>IoT Devices - Agent sending underutilized messages<br>IoT Devices - Default IP Filter Policy should be Deny<br>IoT Devices - IP Filter rule large IP range<br>IoT Devices - Agent message intervals and size should be adjusted<br>IoT Devices - Identical Authentication Credentials<br>IoT Devices - Audited process stopped sending events<br>IoT Devices - Operating system (OS) baseline configuration should be fixed|Moving to **Implement security best practices**.<br>When a recommendation moves to the Implement security best practices security control, which is worth no points, the recommendation no longer affects your secure score.|

## March 2021

Updates in March include:
Updates in March include:
- [Two legacy recommendations no longer write data directly to Azure activity log](#two-legacy-recommendations-no-longer-write-data-directly-to-azure-activity-log)
- [Recommendations page enhancements](#recommendations-page-enhancements)

### Azure Firewall management integrated into Security Center
-When you open Azure Security Center, the first page to appear is the overview page.
+When you open Azure Security Center, the first page to appear is the overview page.
This interactive dashboard provides a unified view into the security posture of your hybrid cloud workloads. Additionally, it shows security alerts, coverage information, and more.
Learn more about this dashboard in [Azure Security Center's overview page](overv
:::image type="content" source="media/release-notes/overview-dashboard-firewall-manager.png" alt-text="Security Center's overview dashboard with a tile for Azure Firewall"::: - ### SQL vulnerability assessment now includes the "Disable rule" experience (preview) Security Center includes a built-in vulnerability scanner to help you discover, track, and remediate potential database vulnerabilities. The results from your assessment scans provide an overview of your SQL machines' security state, and details of any security findings.
If you have an organizational need to ignore a finding, rather than remediate it
Learn more in [Disable specific findings](defender-for-sql-on-machines-vulnerability-assessment.md#disable-specific-findings).

### Azure Monitor Workbooks integrated into Security Center and three templates provided

As part of Ignite Spring 2021, we announced an integrated Azure Monitor Workbooks experience in Security Center.
Learn about using these reports or building your own in [Create rich, interactiv
:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-snip.png" alt-text="Secure score over time report."::: - ### Regulatory compliance dashboard now includes Azure Audit reports (preview)
-From the regulatory compliance dashboard's toolbar, you can now download Azure and Dynamics certification reports.
+From the regulatory compliance dashboard's toolbar, you can now download Azure and Dynamics certification reports.
:::image type="content" source="media/release-notes/audit-reports-regulatory-compliance-dashboard.png" alt-text="Regulatory compliance dashboard's toolbar":::
Learn more about [Managing the standards in your regulatory compliance dashboard
:::image type="content" source="media/release-notes/audit-reports-list-regulatory-compliance-dashboard.png" alt-text="Filtering the list of available Azure Audit reports."::: -- ### Recommendation data can be viewed in Azure Resource Graph with "Explore in ARG" The recommendation details pages now include the "Explore in ARG" toolbar button. Use this button to open an Azure Resource Graph query and explore, export, and share the recommendation's data.
Learn more about [Azure Resource Graph (ARG)](../governance/resource-graph/index
:::image type="content" source="media/release-notes/explore-in-resource-graph.png" alt-text="Explore recommendation data in Azure Resource Graph."::: - ### Updates to the policies for deploying workflow automation Automating your organization's monitoring and incident response processes can greatly improve the time it takes to investigate and mitigate security incidents.
We provide three Azure Policy 'DeployIfNotExist' policies that create and config
|Workflow automation for security recommendations|[Deploy Workflow Automation for Azure Security Center recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef|
|Workflow automation for regulatory compliance changes|[Deploy Workflow Automation for Azure Security Center regulatory compliance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f509122b9-ddd9-47ba-a5f1-d0dac20be63c)|509122b9-ddd9-47ba-a5f1-d0dac20be63c|

There are two updates to the features of these policies:

- When assigned, they will remain enabled by enforcement.
Get started with [workflow automation templates](https://github.com/Azure/Azure-
Learn more about how to [Automate responses to Security Center triggers](workflow-automation.md). -
-### Two legacy recommendations no longer write data directly to Azure activity log
+### Two legacy recommendations no longer write data directly to Azure activity log
Security Center passes the data for almost all security recommendations to Azure Advisor, which, in turn, writes it to [Azure activity log](../azure-monitor/essentials/activity-log.md). For two recommendations, the data is simultaneously written directly to Azure activity log. With this change, Security Center stops writing data for these legacy security recommendations directly to activity log. Instead, we're exporting the data to Azure Advisor as we do for all the other recommendations.

The two legacy recommendations are:

- Endpoint protection health issues should be resolved on your machines
- Vulnerabilities in security configuration on your machines should be remediated

If you've been accessing information for these two recommendations in activity log's "Recommendation of type TaskDiscovery" category, this is no longer available.
-### Recommendations page enhancements
+### Recommendations page enhancements
We've released an improved version of the recommendations list to present more information at a glance.
Updates in February include:
- [Workflow automations can be triggered by changes to regulatory compliance assessments (in preview)](#workflow-automations-can-be-triggered-by-changes-to-regulatory-compliance-assessments-in-preview)
- [Asset inventory page enhancements](#asset-inventory-page-enhancements)

### New security alerts page in the Azure portal released for general availability (GA)

Azure Security Center's security alerts page has been redesigned to provide:
Azure Security Center's security alerts page has been redesigned to provide:
:::image type="content" source="media/managing-and-responding-alerts/alerts-page.png" alt-text="Azure Security Center's security alerts list"::: - ### Kubernetes workload protection recommendations released for general availability (GA) We're happy to announce the general availability (GA) of the set of recommendations for Kubernetes workload protections.
Learn more in [Workload protection best-practices using Kubernetes admission con
> [!NOTE]
> While the recommendations were in preview, they didn't render an AKS cluster resource unhealthy, and they weren't included in the calculations of your secure score. With this GA announcement, they're included in the score calculation. If you haven't remediated them already, this might result in a slight impact on your secure score. Remediate them wherever possible as described in [Remediate recommendations in Azure Security Center](implement-security-recommendations.md).

### Microsoft Defender for Endpoint integration with Azure Defender now supports Windows Server 2019 and Windows 10 on Windows Virtual Desktop (in preview)

Microsoft Defender for Endpoint is a holistic, cloud-delivered endpoint security solution. It provides risk-based vulnerability management and assessment as well as endpoint detection and response (EDR). For a full list of the benefits of using Defender for Endpoint together with Azure Security Center, see [Protect your endpoints with Security Center's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
-When you enable Azure Defender for Servers running Windows Server, a license for Defender for Endpoint is included with the plan. If you've already enabled Azure Defender for Servers and you have Windows Server 2019 servers in your subscription, they'll automatically receive Defender for Endpoint with this update. No manual action is required.
+When you enable Azure Defender for Servers running Windows Server, a license for Defender for Endpoint is included with the plan. If you've already enabled Azure Defender for Servers and you have Windows Server 2019 servers in your subscription, they'll automatically receive Defender for Endpoint with this update. No manual action is required.
Support has now been expanded to include Windows Server 2019 and Windows 10 on [Windows Virtual Desktop](../virtual-desktop/overview.md).
When you're reviewing the details of a recommendation, it's often helpful to be
:::image type="content" source="media/release-notes/view-policy-definition.png" alt-text="Link to Azure Policy page for the specific policy supporting a recommendation.":::
-Use this link to view the policy definition and review the evaluation logic.
+Use this link to view the policy definition and review the evaluation logic.
If you're reviewing the list of recommendations on our [Security recommendations reference guide](recommendations-reference.md), you'll also see links to the policy definition pages:

:::image type="content" source="media/release-notes/view-policy-definition-from-documentation.png" alt-text="Accessing the Azure Policy page for a specific policy directly from the Azure Security Center recommendations reference page." lightbox="media/release-notes/view-policy-definition-from-documentation.png":::

### SQL data classification recommendation no longer affects your secure score

The recommendation **Sensitive data in your SQL databases should be classified** no longer affects your secure score. This is the only recommendation in the **Apply data classification** security control, so that control now has a secure score value of 0.

For a full list of all security controls in Security Center, together with their scores and a list of the recommendations in each, see [Security controls and their recommendations](secure-score-security-controls.md#security-controls-and-their-recommendations).

### Workflow automations can be triggered by changes to regulatory compliance assessments (in preview)

We've added a third data type to the trigger options for your workflow automations: changes to regulatory compliance assessments. Learn how to use the workflow automation tools in [Automate responses to Security Center triggers](workflow-automation.md).

:::image type="content" source="media/release-notes/regulatory-compliance-triggers-workflow-automation.png" alt-text="Using changes to regulatory compliance assessments to trigger a workflow automation." lightbox="media/release-notes/regulatory-compliance-triggers-workflow-automation.png":::

### Asset inventory page enhancements

Security Center's asset inventory page has been improved in the following ways:

- Summaries at the top of the page now include **Unregistered subscriptions**, showing the number of subscriptions without Security Center enabled.
Security Center's asset inventory page has been improved in the following ways:
:::image type="content" source="media/release-notes/unregistered-subscriptions.png" alt-text="Count of unregistered subscriptions in the summaries at the top of the asset inventory page."::: - Filters have been expanded and enhanced to include:
- - **Counts** - Each filter presents the number of resources that meet the criteria of each category
+ - **Counts** - Each filter presents the number of resources that meet the criteria of each category
- :::image type="content" source="media/release-notes/counts-in-inventory-filters.png" alt-text="Counts in the filters in the asset inventory page of Azure Security Center.":::
+ :::image type="content" source="media/release-notes/counts-in-inventory-filters.png" alt-text="Counts in the filters in the asset inventory page of Azure Security Center.":::
- - **Contains exemptions filter** (Optional) - narrow the results to resources that have/haven't got exemptions. This filter isn't shown by default, but is accessible from the **Add filter** button.
+ - **Contains exemptions filter** (Optional) - narrow the results to resources that have/haven't got exemptions. This filter isn't shown by default, but is accessible from the **Add filter** button.
- :::image type="content" source="media/release-notes/adding-contains-exemption-filter.gif" alt-text="Adding the filter 'contains exemption' in Azure Security Center's asset inventory page":::
+ :::image type="content" source="media/release-notes/adding-contains-exemption-filter.gif" alt-text="Adding the filter 'contains exemption' in Azure Security Center's asset inventory page":::
Learn more about how to [Explore and manage your resources with asset inventory](asset-inventory.md).

## January 2021

Updates in January include:
Updates in January include:
- ["Not applicable" resources now reported as "Compliant" in Azure Policy assessments](#not-applicable-resources-now-reported-as-compliant-in-azure-policy-assessments) - [Export weekly snapshots of secure score and regulatory compliance data with continuous export (preview)](#export-weekly-snapshots-of-secure-score-and-regulatory-compliance-data-with-continuous-export-preview) - ### Azure Security Benchmark is now the default policy initiative for Azure Security Center Azure Security Benchmark is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. In recent months, Security Center's list of built-in security recommendations has grown significantly to expand our coverage of this benchmark.
-From this release, the benchmark is the foundation for Security Center's recommendations and fully integrated as the default policy initiative.
+From this release, the benchmark is the foundation for Security Center's recommendations and fully integrated as the default policy initiative.
All Azure services have a security baseline page in their documentation. These baselines are built on Azure Security Benchmark.
If you're using Security Center's regulatory compliance dashboard, you'll see tw
:::image type="content" source="media/release-notes/regulatory-compliance-with-azure-security-benchmark.png" alt-text="Azure Security Center's regulatory compliance dashboard showing the Azure Security Benchmark":::
-Existing recommendations are unaffected and as the benchmark grows, changes will automatically be reflected within Security Center.
+Existing recommendations are unaffected and as the benchmark grows, changes will automatically be reflected within Security Center.
To learn more, see the following pages:
Main capabilities:
[Learn more about Azure Arc-enabled servers](../azure-arc/servers/index.yml).

### Secure score for management groups is now available in preview

The secure score page now shows the aggregated secure scores for your management groups in addition to the subscription level. So now you can see the list of management groups in your organization and the score for each management group.
Learn about external tools made possible with the secure score API in [the secur
Learn more about [secure score and security controls in Azure Security Center](secure-score-security-controls.md).

### Dangling DNS protections added to Azure Defender for App Service
-Subdomain takeovers are a common, high-severity threat for organizations. A subdomain takeover can occur when you have a DNS record that points to a deprovisioned web site. Such DNS records are also known as "dangling DNS" entries. CNAME records are especially vulnerable to this threat.
+Subdomain takeovers are a common, high-severity threat for organizations. A subdomain takeover can occur when you have a DNS record that points to a deprovisioned web site. Such DNS records are also known as "dangling DNS" entries. CNAME records are especially vulnerable to this threat.
Subdomain takeovers enable threat actors to redirect traffic intended for an organization's domain to a site performing malicious activity.
Learn more:
- [Prevent dangling DNS entries and avoid subdomain takeover](../security/fundamentals/subdomain-takeover.md) - Learn about the threat of subdomain takeover and the dangling DNS aspect
- [Introduction to Azure Defender for App Service](defender-for-app-service-introduction.md)

### Multicloud connectors are released for general availability (GA)

With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same.
From Defender for Cloud's menu, select **Multicloud connectors** and you'll see
:::image type="content" source="./media/quickstart-onboard-aws/add-aws-account.png" alt-text="Add AWS account button on Security Center's multicloud connectors page"::: Learn more in:+ - [Connect your AWS accounts to Azure Security Center](quickstart-onboard-aws.md) - [Connect your GCP projects to Azure Security Center](quickstart-onboard-gcp.md) - ### Exempt entire recommendations from your secure score for subscriptions and management groups We're expanding the exemption capability to include entire recommendations. Providing further options to fine-tune the security recommendations that Security Center makes for your subscriptions, management group, or resources.
With this preview feature, you can now create an exemption for a recommendation
Learn more in [Exempting resources and recommendations from your secure score](exempt-resource.md).

### Users can now request tenant-wide visibility from their global administrator

If a user doesn't have permissions to see Security Center data, they'll now see a link to request permissions from their organization's global administrator. The request includes the role they'd like and the justification for why it's necessary.
If a user doesn't have permissions to see Security Center data, they'll now see
Learn more in [Request tenant-wide permissions when yours are insufficient](tenant-wide-permissions-management.md#request-tenant-wide-permissions-when-yours-are-insufficient). - ### 35 preview recommendations added to increase coverage of Azure Security Benchmark
[Azure Security Benchmark](/security/benchmark/azure/introduction) is the default policy initiative in Azure Security Center.
To increase the coverage of this benchmark, the following 35 preview recommendations have been added to Security Center.
To increase the coverage of this benchmark, the following 35 preview recommendat
| Protect applications against DDoS attacks | - Web Application Firewall (WAF) should be enabled for Application Gateway<br> - Web Application Firewall (WAF) should be enabled for Azure Front Door Service service | | Restrict unauthorized network access | - Firewall should be enabled on Key Vault<br> - Private endpoint should be configured for Key Vault<br> - App Configuration should use private link<br> - Azure Cache for Redis should reside within a virtual network<br> - Azure Event Grid domains should use private link<br> - Azure Event Grid topics should use private link<br> - Azure Machine Learning workspaces should use private link<br> - Azure SignalR Service should use private link<br> - Azure Spring Cloud should use network injection<br> - Container registries should not allow unrestricted network access<br> - Container registries should use private link<br> - Public network access should be disabled for MariaDB servers<br> - Public network access should be disabled for MySQL servers<br> - Public network access should be disabled for PostgreSQL servers<br> - Storage account should use a private link connection<br> - Storage accounts should restrict network access using virtual network rules<br> - VM Image Builder templates should use private link| - Related links: - [Learn more about Azure Security Benchmark](/security/benchmark/azure/introduction)
Related links:
- [Learn more about Azure Database for MySQL](../mysql/overview.md) - [Learn more about Azure Database for PostgreSQL](../postgresql/overview.md)
### CSV export of filtered list of recommendations

In November 2020, we added filters to the recommendations page ([Recommendations list now includes filters](release-notes-archive.md#recommendations-list-now-includes-filters)). In December, we expanded those filters ([Recommendations page has new filters for environment, severity, and available responses](release-notes-archive.md#recommendations-page-has-new-filters-for-environment-severity-and-available-responses)).

With this announcement, we're changing the behavior of the **Download to CSV** button so that the CSV export only includes the recommendations currently displayed in the filtered list.

For example, in the image below you can see that the list has been filtered to two recommendations. The CSV file that is generated includes the status details for every resource affected by those two recommendations.
:::image type="content" source="media/managing-and-responding-alerts/export-to-csv-with-filters.png" alt-text="Exporting filtered recommendations to a CSV file."::: Learn more in [Security recommendations in Azure Security Center](review-security-recommendations.md). - ### "Not applicable" resources now reported as "Compliant" in Azure Policy assessments Previously, resources that were evaluated for a recommendation and found to be **not applicable** appeared in Azure Policy as "Non-compliant". No user actions could change their state to "Compliant". With this change, they're reported as "Compliant" for improved clarity. The only impact will be seen in Azure Policy where the number of compliant resources will increase. There will be no impact to your secure score in Azure Security Center. - ### Export weekly snapshots of secure score and regulatory compliance data with continuous export (preview) We've added a new preview feature to the [continuous export](continuous-export.md) tools for exporting weekly snapshots of secure score and regulatory compliance data.
Updates in December include:
- [Recommendations page has new filters for environment, severity, and available responses](#recommendations-page-has-new-filters-for-environment-severity-and-available-responses)
- [Continuous export gets new data types and improved deployifnotexist policies](#continuous-export-gets-new-data-types-and-improved-deployifnotexist-policies)

### Azure Defender for SQL servers on machines is generally available

Azure Security Center offers two Azure Defender plans for SQL Servers:
- **Azure Defender for Azure SQL database servers** - defends your Azure-native SQL Servers
- **Azure Defender for SQL servers on machines** - extends the same protections to your SQL servers in hybrid, multicloud, and on-premises environments With this announcement, **Azure Defender for SQL** now protects your databases and their data wherever they're located.
Azure Defender for SQL includes vulnerability assessment capabilities. The vulne
Learn more about [Azure Defender for SQL](defender-for-sql-introduction.md).

### Azure Defender for SQL support for Azure Synapse Analytics dedicated SQL pool is generally available

Azure Synapse Analytics (formerly SQL DW) is an analytics service that combines enterprise data warehousing and big data analytics. Dedicated SQL pools are the enterprise data warehousing features of Azure Synapse. Learn more in [What is Azure Synapse Analytics (formerly SQL DW)?](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md).

Azure Defender for SQL protects your dedicated SQL pools with:
- **Advanced threat protection** to detect threats and attacks
- **Vulnerability assessment capabilities** to identify and remediate security misconfigurations

Azure Defender for SQL's support for Azure Synapse Analytics SQL pools is automatically added to the Azure SQL databases bundle in Azure Security Center. You'll find a new "Azure Defender for SQL" tab in your Synapse workspace page in the Azure portal.

Learn more about [Azure Defender for SQL](defender-for-sql-introduction.md).

### Global Administrators can now grant themselves tenant-level permissions
A user with the Azure Active Directory role of **Global Administrator** might have tenant-wide responsibilities, but lack the Azure permissions to view that organization-wide information in Azure Security Center.
To assign yourself tenant-level permissions, follow the instructions in [Grant tenant-wide permissions to yourself](tenant-wide-permissions-management.md#grant-tenant-wide-permissions-to-yourself). - ### Two new Azure Defender plans: Azure Defender for DNS and Azure Defender for Resource Manager (in preview) We've added two new cloud-native breadth threat protection capabilities for your Azure environment.
We've added two new cloud-native breadth threat protection capabilities for your
These new protections greatly enhance your resiliency against attacks from threat actors, and significantly increase the number of Azure resources protected by Azure Defender. - **Azure Defender for Resource Manager** - automatically monitors all resource management operations performed in your organization. For more information, see:
  - [Introduction to Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md)
  - [Respond to Azure Defender for Resource Manager alerts](defender-for-resource-manager-usage.md)
  - [List of alerts provided by Azure Defender for Resource Manager](alerts-reference.md#alerts-resourcemanager)
- **Azure Defender for DNS** - continuously monitors all DNS queries from your Azure resources. For more information, see:
  - [Introduction to Azure Defender for DNS](defender-for-dns-introduction.md)
  - [Respond to Azure Defender for DNS alerts](defender-for-dns-usage.md)
  - [List of alerts provided by Azure Defender for DNS](alerts-reference.md#alerts-dns)
### New security alerts page in the Azure portal (preview)
To access the new experience, use the 'try it now' link from the banner at the t
To create sample alerts from the new alerts experience, see [Generate sample Azure Defender alerts](alert-validation.md#generate-sample-security-alerts). -
### Revitalized Security Center experience in Azure SQL Database & SQL Managed Instance
The Security Center experience within SQL provides access to the following Security Center and Azure Defender for SQL features:

- **Security recommendations** – Security Center periodically analyzes the security state of all connected Azure resources to identify potential security misconfigurations. It then provides recommendations on how to remediate those vulnerabilities and improve organizations' security posture.
- **Security alerts** – a detection service that continuously monitors Azure SQL activities for threats such as SQL injection, brute-force attacks, and privilege abuse. This service triggers detailed and action-oriented security alerts in Security Center and provides options for continuing investigations with Azure Sentinel, Microsoft's Azure-native SIEM solution.
- **Findings** – a vulnerability assessment service that continuously monitors Azure SQL configurations and helps remediate vulnerabilities. Assessment scans provide an overview of Azure SQL security states together with detailed security findings.
:::image type="content" source="media/release-notes/microsoft-defender-for-cloud-experience-in-sql.png" alt-text="Azure Security Center's security features for SQL are available from within Azure SQL":::

### Asset inventory tools and filters updated

The inventory page in Azure Security Center has been refreshed with the following changes:

- **Guides and feedback** added to the toolbar. This opens a pane with links to related information and tools.
- **Subscriptions filter** added to the default filters available for your resources.
- **Open query** link for opening the current filter options as an Azure Resource Graph query (formerly called "View in resource graph explorer").
- **Operator options** for each filter. Now you can choose from more logical operators other than '='. For example, you might want to find all resources with active recommendations whose titles include the string 'encrypt' (a sketch of such a query follows this list).
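As a rough illustration, a filter like that can be expressed as an Azure Resource Graph query. The sketch below uses the `azure-mgmt-resourcegraph` Python package; the table and field names are assumptions based on the public `securityresources` schema, and the query the portal's **Open query** link generates may differ:

```python
# Minimal sketch: resources with active (unhealthy) recommendations whose
# titles include 'encrypt'. Field names are assumptions; verify in ARG Explorer.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

query = """
securityresources
| where type == 'microsoft.security/assessments'
| where properties.displayName contains 'encrypt'
| where properties.status.code == 'Unhealthy'
| project resourceId = tostring(properties.resourceDetails.Id),
          recommendation = tostring(properties.displayName)
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=query))
for row in result.data:
    print(row["resourceId"], "-", row["recommendation"])
```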
:::image type="content" source="media/release-notes/inventory-filter-operators.png" alt-text="Controls for the operator option in asset inventory's filters"::: Learn more about inventory in [Explore and manage your resources with asset inventory](asset-inventory.md). - ### Recommendation about web apps requesting SSL certificates no longer part of secure score
-The recommendation "Web apps should request an SSL certificate for all incoming requests" has been moved from the security control **Manage access and permissions** (worth a maximum of 4 pts) into **Implement security best practices** (which is worth no points).
+The recommendation "Web apps should request an SSL certificate for all incoming requests" has been moved from the security control **Manage access and permissions** (worth a maximum of 4 pts) into **Implement security best practices** (which is worth no points).
Ensuring a web app requests a certificate certainly makes it more secure. However, for public-facing web apps it's irrelevant. If you access your site over HTTP and not HTTPS, you will not receive any client certificate. So if your application requires client certificates, you should not allow requests to your application over HTTP. Learn more in [Configure TLS mutual authentication for Azure App Service](../app-service/app-service-web-configure-tls-mutual-auth.md).
With this change, the recommendation is now a recommended best practice that does not impact your score.
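For a sense of the arithmetic behind "worth no points": a control's contribution to your secure score is proportional to the share of its resources that are healthy. A rough sketch of that calculation, with invented numbers:

```python
# Illustrative numbers only. A control's contribution is its maximum points
# scaled by the share of healthy resources; a 0-point control can't move the score.
max_points = 4            # e.g. "Manage access and permissions"
healthy, total = 30, 40   # resources passing every recommendation in the control
contribution = max_points * healthy / total
print(f"{contribution:g} of {max_points} points")  # -> 3 of 4 points
```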
Learn which recommendations are in each security control in [Security controls and their recommendations](secure-score-security-controls.md#security-controls-and-their-recommendations). - ### Recommendations page has new filters for environment, severity, and available responses Azure Security Center monitors all connected resources and generates security recommendations. Use these recommendations to strengthen your hybrid cloud posture and track compliance with the policies and standards relevant to your organization, industry, and country.
The filters added this month provide options to refine the recommendations list
- **Response actions** - View recommendations according to the availability of Security Center response options: Fix, Deny, and Enforce > [!TIP]
> The response actions filter replaces the **Quick fix available (Yes/No)** filter.
>
> Learn more about each of these response options:
>
> - [Fix button](implement-security-recommendations.md#fix-button) > - [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md)
These tools have been enhanced and expanded in the following ways:
- **Continuous export's deployifnotexist policies enhanced**. The policies now:
    - **Check whether the configuration is enabled.** If it isn't, the policy will show as non-compliant and create a compliant resource. Learn more about the supplied Azure Policy templates in the "Deploy at scale with Azure Policy" tab in [Set up a continuous export](continuous-export.md#set-up-a-continuous-export).

    - **Support exporting security findings.** When using the Azure Policy templates, you can configure your continuous export to include findings. This is relevant when exporting recommendations that have 'sub' recommendations, like findings from vulnerability assessment scanners or specific system updates for the 'parent' recommendation "System updates should be installed on your machines".

    - **Support exporting secure score data.**
- **Regulatory compliance assessment data added (in preview).** You can now continuously export updates to regulatory compliance assessments, including for any custom initiatives, to a Log Analytics workspace or Event Hubs. This feature is unavailable on national clouds.
Preview recommendations don't render a resource unhealthy, and they aren't inclu
| Enable auditing and logging | - Diagnostic logs in App Services should be enabled | | Implement security best practices | - Azure Backup should be enabled for virtual machines<br>- Geo-redundant backup should be enabled for Azure Database for MariaDB<br>- Geo-redundant backup should be enabled for Azure Database for MySQL<br>- Geo-redundant backup should be enabled for Azure Database for PostgreSQL<br>- PHP should be updated to the latest version for your API app<br>- PHP should be updated to the latest version for your web app<br>- Java should be updated to the latest version for your API app<br>- Java should be updated to the latest version for your function app<br>- Java should be updated to the latest version for your web app<br>- Python should be updated to the latest version for your API app<br>- Python should be updated to the latest version for your function app<br>- Python should be updated to the latest version for your web app<br>- Audit retention for SQL servers should be set to at least 90 days | - Related links: - [Learn more about Azure Security Benchmark](/security/benchmark/azure/introduction)
Related links:
- [Learn more about Azure Database for MySQL](../mysql/overview.md) - [Learn more about Azure Database for PostgreSQL](../postgresql/overview.md) - ### NIST SP 800 171 R2 added to Security Center's regulatory compliance dashboard
The NIST SP 800-171 R2 standard is now available as a built-in initiative for use with Azure Security Center's regulatory compliance dashboard. The mappings for the controls are described in [Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative](../governance/policy/samples/nist-sp-800-171-r2.md).
To apply the standard to your subscriptions and continuously monitor your compliance status, use the instructions in [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
To apply the standard to your subscriptions and continuously monitor your compli
For more information about this compliance standard, see [NIST SP 800-171 R2](https://csrc.nist.gov/publications/detail/sp/800-171/rev-2/final). - ### Recommendations list now includes filters You can now filter the list of security recommendations according to a range of criteria. In the following example, the recommendations list has been filtered to show recommendations that:
You can now filter the list of security recommendations according to a range of
:::image type="content" source="media/release-notes/recommendations-filters.png" alt-text="Filters for the recommendations list."::: - ### Auto provisioning experience improved and expanded
The auto provisioning feature helps reduce management overhead by installing the required extensions on new - and existing - Azure VMs so they can benefit from Security Center's protections.
As Azure Security Center grows, more extensions have been developed and Security Center can monitor a larger list of resource types. The auto provisioning tools have now been expanded to support other extensions and resource types by leveraging the capabilities of Azure Policy.
You can now configure the auto provisioning of:
Learn more in [Auto provisioning agents and extensions from Azure Security Center](enable-data-collection.md). - ### Secure score is now available in continuous export (preview) With continuous export of secure score, you can stream changes to your score in real-time to Azure Event Hubs or a Log Analytics workspace. Use this capability to:
With continuous export of secure score, you can stream changes to your score in
Learn more about how to [Continuously export Security Center data](continuous-export.md). - ### "System updates should be installed on your machines" recommendation now includes subrecommendations The **System updates should be installed on your machines** recommendation has been enhanced. The new version includes subrecommendations for each missing update and brings the following improvements:
The **System updates should be installed on your machines** recommendation has b
:::image type="content" source="./media/upcoming-changes/system-updates-should-be-installed-subassessment.png" alt-text="Opening one of the subrecommendations in the portal experience for the updated recommendation.":::

- Enriched data for the recommendation from Azure Resource Graph (ARG). ARG is an Azure service that's designed to provide efficient resource exploration. You can use ARG to query at scale across a given set of subscriptions so that you can effectively govern your environment.
For Azure Security Center, you can use ARG and the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) to query a wide range of security posture data.
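For example, the per-update findings described above surface in Resource Graph as sub-assessments. A minimal sketch using the `azure-mgmt-resourcegraph` package (the table and field names are assumptions to verify against the published schema):

```python
# Minimal sketch: list missing-system-update findings ('sub' recommendations).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

query = """
securityresources
| where type == 'microsoft.security/assessments/subassessments'
| project machineId = tostring(properties.resourceDetails.Id),
          finding = tostring(properties.displayName)
"""

client = ResourceGraphClient(DefaultAzureCredential())
for row in client.resources(QueryRequest(subscriptions=["<subscription-id>"], query=query)).data:
    print(row["machineId"], "-", row["finding"])
```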
You can now see whether or not your subscriptions have the default Security Cent
:::image type="content" source="media/release-notes/policy-assignment-info-per-subscription.png" alt-text="The policy management page of Azure Security Center showing the default policy assignments.":::

## October 2020

Updates in October include:
- [Vulnerability assessment for on-premises and multicloud machines (preview)](#vulnerability-assessment-for-on-premise-and-multicloud-machines-preview)
- [Azure Firewall recommendation added (preview)](#azure-firewall-recommendation-added-preview)
- [Authorized IP ranges should be defined on Kubernetes Services recommendation updated with quick fix](#authorized-ip-ranges-should-be-defined-on-kubernetes-services-recommendation-updated-with-quick-fix)
Main capabilities:
[Learn more about Azure Arc-enabled servers](../azure-arc/servers/index.yml). - ### Azure Firewall recommendation added (preview) A new recommendation has been added to protect all your virtual networks with Azure Firewall.
The recommendation, **Virtual networks should be protected by Azure Firewall** a
Learn more about [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/). - ### Authorized IP ranges should be defined on Kubernetes Services recommendation updated with quick fix The recommendation **Authorized IP ranges should be defined on Kubernetes Services** now has a quick fix option.
For more information about this recommendation and all other Security Center rec
:::image type="content" source="./media/release-notes/authorized-ip-ranges-recommendation.png" alt-text="The authorized IP ranges should be defined on Kubernetes Services recommendation with the quick fix option."::: - ### Regulatory compliance dashboard now includes option to remove standards Security Center's regulatory compliance dashboard provides insights into your compliance posture based on how you're meeting specific compliance controls and requirements.
The dashboard includes a default set of regulatory standards. If any of the supp
Learn more in [Remove a standard from your dashboard](update-regulatory-compliance-packages.md#remove-a-standard-from-your-dashboard). - ### Microsoft.Security/securityStatuses table removed from Azure Resource Graph (ARG)
Azure Resource Graph is a service in Azure that is designed to provide efficient resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment.
For Azure Security Center, you can use ARG and the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) to query a wide range of security posture data. For example:
properties: {
securitystate: "High" } ```

Whereas, Microsoft.Security/Assessments will hold a record for each such policy assessment as follows:

```
extract("^(.+)/providers/Microsoft.Security/assessments/.+$",1,id)))))
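// A minimal illustrative sketch of querying the replacement
// Microsoft.Security/assessments records (field names assumed):
securityresources
| where type == "microsoft.security/assessments"
| extend assessedResourceId = extract("^(.+)/providers/Microsoft.Security/assessments/.+$", 1, id)
| project assessedResourceId, assessmentKey = name, statusCode = properties.status.code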
```

Learn more at the following links:
- [How to create queries with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md)
- [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)

## September 2020

Updates in September include:
- [Security Center gets a new look!](#security-center-gets-a-new-look)
- [Azure Defender released](#azure-defender-released)
- [Azure Defender for Key Vault is generally available](#azure-defender-for-key-vault-is-generally-available)
Updates in September include:
- [Secure score doesn't include preview recommendations](#secure-score-doesnt-include-preview-recommendations) - [Recommendations now include a severity indicator and the freshness interval](#recommendations-now-include-a-severity-indicator-and-the-freshness-interval) -
-### Security Center gets a new look!
+### Security Center gets a new look
We've released a refreshed UI for Security Center's portal pages. The new pages include a new overview page and dashboards for secure score, asset inventory, and Azure Defender.
The redesigned overview page now has a tile for accessing the secure score, asse
Learn more about the [overview page](overview-page.md). - ### Azure Defender released
**Azure Defender** is the cloud workload protection platform (CWPP) integrated within Security Center for advanced, intelligent protection of your Azure and hybrid workloads. It replaces Security Center's standard pricing tier option.
When you enable Azure Defender from the **Pricing and settings** area of Azure Security Center, the following Defender plans are all enabled simultaneously and provide comprehensive defenses for the compute, data, and service layers of your environment:
With its dedicated dashboard, Azure Defender provides security alerts and advanc
### Azure Defender for Key Vault is generally available
Azure Key Vault is a cloud service that safeguards encryption keys and secrets like certificates, connection strings, and passwords.
**Azure Defender for Key Vault** provides Azure-native, advanced threat protection for Azure Key Vault, adding an extra layer of security intelligence. By extension, it also protects many of the resources that depend on your Key Vault accounts.
Also, the Key Vault pages in the Azure portal now include a dedicated **Security
Learn more in [Azure Defender for Key Vault](defender-for-key-vault-introduction.md). -
### Azure Defender for Storage protection for Files and ADLS Gen2 is generally available
**Azure Defender for Storage** detects potentially harmful activity on your Azure Storage accounts. Your data can be protected whether it's stored as blob containers, file shares, or data lakes.
From 1 October 2020, we'll begin charging for protecting resources on these serv
Learn more in [Azure Defender for Storage](defender-for-storage-introduction.md). - ### Asset inventory tools are now generally available The asset inventory page of Azure Security Center provides a single page for viewing the security posture of the resources you've connected to Security Center.
When any resource has outstanding recommendations, they'll appear in the invento
Learn more in [Explore and manage your resources with asset inventory](asset-inventory.md). -- ### Disable a specific vulnerability finding for scans of container registries and virtual machines Azure Defender includes vulnerability scanners to scan images in your Azure Container Registry and your virtual machines.
This option is available from the recommendations details pages for:
Learn more in [Disable specific findings for your container images](defender-for-containers-usage.md#disable-specific-findings) and [Disable specific findings for your virtual machines](remediate-vulnerability-findings-vm.md#disable-specific-findings). - ### Exempt a resource from a recommendation
Occasionally, a resource will be listed as unhealthy for a specific recommendation (therefore lowering your secure score) even though you feel it shouldn't be. It might have been remediated by a process not tracked by Security Center. Or perhaps your organization has decided to accept the risk for that specific resource.
In such cases, you can create an exemption rule and ensure that resource isn't listed amongst the unhealthy resources in the future. These rules can include documented justifications as described below. Learn more in [Exempt a resource from recommendations and secure score](exempt-resource.md). - ### AWS and GCP connectors in Security Center bring a multicloud experience With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Azure Security Center now protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
Onboarding your AWS and GCP projects into Security Center integrates AWS Security Hub, GCP Security Command Center, and Azure Security Center.
Learn more in [Connect your AWS accounts to Azure Security Center](quickstart-onboard-aws.md) and [Connect your GCP projects to Azure Security Center](quickstart-onboard-gcp.md). - ### Kubernetes workload protection recommendation bundle To ensure that Kubernetes workloads are secure by default, Security Center is adding Kubernetes level hardening recommendations, including enforcement options with Kubernetes admission control.
For example, you can mandate that privileged containers shouldn't be created, an
Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#hardening).

### Vulnerability assessment findings are now available in continuous export

Use continuous export to stream your alerts and recommendations to Azure Event Hubs, Log Analytics workspaces, or Azure Monitor. From there, you can integrate this data with SIEMs (such as Azure Sentinel), Power BI, Azure Data Explorer, and more.
Security Center's integrated vulnerability assessment tools return findings about your resources as actionable recommendations within a 'parent' recommendation such as "Vulnerabilities in your virtual machines should be remediated".
The security findings are now available for export through continuous export when you select recommendations and enable the **include security findings** option.
Related pages:
### Prevent security misconfigurations by enforcing recommendations when creating new resources
Security misconfigurations are a major cause of security incidents. Security Center now has the ability to help *prevent* misconfigurations of new resources with regard to specific recommendations.
This feature can help keep your workloads secure and stabilize your secure score.
Enforcing a secure configuration, based on a specific recommendation, is offered
- Using the **Deny** effect of Azure Policy, you can stop unhealthy resources from being created - Using the **Enforce** option, you can take advantage of Azure Policy's **DeployIfNotExist** effect and automatically remediate non-compliant resources upon creation
This is available for selected security recommendations and can be found at the top of the resource details page. Learn more in [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md).
### Network security group recommendations improved
The following security recommendations related to network security groups have been improved to reduce some instances of false positives.
The following security recommendations related to network security groups have b
- Internet-facing virtual machines should be protected with Network Security Groups - Subnets should be associated with a Network Security Group - ### Deprecated preview AKS recommendation "Pod Security Policies should be defined on Kubernetes Services" The preview recommendation "Pod Security Policies should be defined on Kubernetes Services" is being deprecated as described in the [Azure Kubernetes Service](../aks/use-pod-security-policies.md) documentation.
The pod security policy (preview) feature, is set for deprecation and will no lo
After pod security policy (preview) is deprecated, you must disable the feature on any existing clusters using the deprecated feature to perform future cluster upgrades and stay within Azure support. - ### Email notifications from Azure Security Center improved
The following areas of the email notifications about security alerts have been improved:
- Added the ability to send email notifications about alerts for all severity levels - Added the ability to notify users with different Azure roles on the subscription
The following areas of the emails regarding security alerts have been improved:
Learn more in [Set up email notifications for security alerts](configure-email-notifications.md). -
### Secure score doesn't include preview recommendations
Security Center continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can tell, at a glance, your current security situation: the higher the score, the lower the identified risk level.
An example of a preview recommendation:
[Learn more about secure score](secure-score-security-controls.md). - ### Recommendations now include a severity indicator and the freshness interval The details page for recommendations now includes a freshness interval indicator (whenever relevant) and a clear display of the severity of the recommendation. :::image type="content" source="./media/release-notes/recommendations-severity-freshness-indicators.png" alt-text="Recommendation page showing freshness and severity."::: - ## August 2020 Updates in August include:
Updates in August include:
- [Vulnerability assessment on VMs - recommendations and policies consolidated](#vulnerability-assessment-on-vmsrecommendations-and-policies-consolidated)
- [New AKS security policies added to ASC_default initiative – for use by private preview customers only](#new-aks-security-policies-added-to-asc_default-initiative--for-use-by-private-preview-customers-only)

### Asset inventory - powerful new view of the security posture of your assets

Security Center's asset inventory (currently in preview) provides a way to view the security posture of the resources you've connected to Security Center.
You can use the view and its filters to explore your security posture data and t
Learn more about [asset inventory](asset-inventory.md).

### Added support for Azure Active Directory security defaults (for multi-factor authentication)

Security Center has added full support for [security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md), Microsoft's free identity security protections. Security defaults provide preconfigured identity security settings to defend your organization from common identity-related attacks. Security defaults are already protecting more than 5 million tenants overall; 50,000 of those tenants are also protected by Security Center.
Security Center now provides a security recommendation whenever it identifies an Azure subscription without security defaults enabled. Until now, Security Center recommended enabling multi-factor authentication using conditional access, which is part of the Azure Active Directory (AD) premium license. For customers using Azure AD free, we now recommend enabling security defaults.
Our goal is to encourage more customers to secure their cloud environments with MFA, and mitigate one of the highest risks that is also the most impactful to your [secure score](secure-score-security-controls.md). Learn more about [security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md). - ### Service principals recommendation added A new recommendation has been added to recommend that Security Center customers using management certificates to manage their subscriptions switch to service principals.
The recommendation, **Service principals should be used to protect your subscriptions instead of Management Certificates**, advises you to use Service Principals or Azure Resource Manager to more securely manage your subscriptions.
Learn more about [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object). - ### Vulnerability assessment on VMs - recommendations and policies consolidated Security Center inspects your VMs to detect whether they're running a vulnerability assessment solution. If no vulnerability assessment solution is found, Security Center provides a recommendation to simplify the deployment.
To ensure a consistent experience for all users, regardless of the scanner type
|**A vulnerability assessment solution should be enabled on your virtual machines**|Replaces the following two recommendations:<br> - Enable the built-in vulnerability assessment solution on virtual machines (powered by Qualys) (now deprecated) (Included with standard tier)<br> - Vulnerability assessment solution should be installed on your virtual machines (now deprecated) (Standard and free tiers)| |**Vulnerabilities in your virtual machines should be remediated**|Replaces the following two recommendations:<br> - Remediate vulnerabilities found on your virtual machines (powered by Qualys) (now deprecated)<br> - Vulnerabilities should be remediated by a Vulnerability Assessment solution (now deprecated)|

Now you'll use the same recommendation to deploy Security Center's vulnerability assessment extension or a privately licensed solution ("BYOL") from a partner such as Qualys or Rapid7. Also, when vulnerabilities are found and reported to Security Center, a single recommendation will alert you to the findings regardless of the vulnerability assessment solution that identified them.
If you have scripts, queries, or automations referring to the previous recommend
|**Vulnerability assessment solution should be installed on your virtual machines**<br>Key: 01b1ed4c-b733-4fee-b145-f23236e70cf3|BYOL| |**Vulnerabilities should be remediated by a Vulnerability Assessment solution**<br>Key: 71992a2a-d168-42e0-b10e-6b45fa2ecddb|BYOL|

|Policy|Scope| |-|:-| |**Vulnerability assessment should be enabled on virtual machines**<br>Policy ID: 501541f7-f7e7-4cd6-868c-4190fdad3ac9|Built-in| |**Vulnerabilities should be remediated by a vulnerability assessment solution**<br>Policy ID: 760a85ff-6162-42b3-8d70-698e268f648c|BYOL|

##### From August 2020

|Recommendation|Scope|
If you have scripts, queries, or automations referring to the previous recommend
|**A vulnerability assessment solution should be enabled on your virtual machines**<br>Key: ffff0522-1e88-47fc-8382-2a80ba848f5d|Built-in + BYOL| |**Vulnerabilities in your virtual machines should be remediated**<br>Key: 1195afff-c881-495e-9bc5-1486211ae03f|Built-in + BYOL|

|Policy|Scope| |-|:-| |[**Vulnerability assessment should be enabled on virtual machines**](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f501541f7-f7e7-4cd6-868c-4190fdad3ac9)<br>Policy ID: 501541f7-f7e7-4cd6-868c-4190fdad3ac9 |Built-in + BYOL|

### New AKS security policies added to ASC_default initiative – for use by private preview customers only

To ensure that Kubernetes workloads are secure by default, Security Center is adding Kubernetes level policies and hardening recommendations, including enforcement options with Kubernetes admission control. The early phase of this project includes a private preview and the addition of new (disabled by default) policies to the ASC_default initiative.
-You can safely ignore these policies and there will be no impact on your environment. If you'd like to enable them, sign up for the preview at https://aka.ms/SecurityPrP and select from the following options:
+You can safely ignore these policies and there will be no impact on your environment. If you'd like to enable them, sign up for the preview at <https://aka.ms/SecurityPrP> and select from the following options:
1. **Single Preview** – To join only this private preview. Explicitly mention "ASC Continuous Scan" as the preview you would like to join.
1. **Ongoing Program** – To be added to this and future private previews. You'll need to complete a profile and privacy agreement.

## July 2020

Updates in July include:
- [Vulnerability assessment for virtual machines is now available for non-marketplace images](#vulnerability-assessment-for-virtual-machines-is-now-available-for-non-marketplace-images)
- [Threat protection for Azure Storage expanded to include Azure Files and Azure Data Lake Storage Gen2 (preview)](#threat-protection-for-azure-storage-expanded-to-include-azure-files-and-azure-data-lake-storage-gen2-preview)
- [Eight new recommendations to enable threat protection features](#eight-new-recommendations-to-enable-threat-protection-features)
Updates in July include:
- [Adaptive application controls updated with a new recommendation and support for wildcards in path rules](#adaptive-application-controls-updated-with-a-new-recommendation-and-support-for-wildcards-in-path-rules)
- [Six policies for SQL advanced data security deprecated](#six-policies-for-sql-advanced-data-security-deprecated)

### Vulnerability assessment for virtual machines is now available for non-marketplace images
When deploying a vulnerability assessment solution, Security Center previously performed a validation check before deploying. The check confirmed a marketplace SKU of the destination virtual machine.
From this update, the check has been removed and you can now deploy vulnerability assessment tools to 'custom' Windows and Linux machines. Custom images are ones that you've modified from the marketplace defaults.
Learn more about the [integrated vulnerability scanner for virtual machines (req
Learn more about using your own privately-licensed vulnerability assessment solution from Qualys or Rapid7 in [Deploying a partner vulnerability scanning solution](deploy-vulnerability-assessment-vm.md). - ### Threat protection for Azure Storage expanded to include Azure Files and Azure Data Lake Storage Gen2 (preview)
Threat protection for Azure Storage detects potentially harmful activity on your Azure Storage accounts. Security Center displays alerts when it detects attempts to access or exploit your storage accounts.
Your data can be protected whether it's stored as blob containers, file shares, or data lakes.

### Eight new recommendations to enable threat protection features

Eight new recommendations have been added to provide a simple way to enable Azure Security Center's threat protection features for the following resource types: virtual machines, App Service plans, Azure SQL Database servers, SQL servers on machines, Azure Storage accounts, Azure Kubernetes Service clusters, Azure Container Registry registries, and Azure Key Vault vaults.
The new recommendations are:
These new recommendations belong to the **Enable Azure Defender** security control.
The recommendations also include the quick fix capability.
> [!IMPORTANT] > Remediating any of these recommendations will result in charges for protecting the relevant resources. These charges will begin immediately if you have related resources in the current subscription. Or in the future, if you add them at a later date.
>
> For example, if you don't have any Azure Kubernetes Service clusters in your subscription and you enable the threat protection, no charges will be incurred. If, in the future, you add a cluster on the same subscription, it will automatically be protected and charges will begin at that time.

Learn more about each of these in the [security recommendations reference page](recommendations-reference.md). Learn more about [threat protection in Azure Security Center](azure-defender.md).

### Container security improvements - faster registry scanning and refreshed documentation

As part of the continuous investments in the container security domain, we are happy to share a significant performance improvement in Security Center's dynamic scans of container images stored in Azure Container Registry. Scans now typically complete in approximately two minutes. In some cases, they might take up to 15 minutes.
To improve the clarity and guidance regarding Azure Security Center's container security capabilities, we've also refreshed the container security documentation pages.
Learn more about Security Center's container security in the following articles:
Learn more about Security Center's container security in the following articles:
- [Security alerts from the threat protection features for Azure Kubernetes Service clusters](alerts-reference.md#alerts-k8scluster)
- [Security recommendations for containers](recommendations-reference.md#recs-compute)

### Adaptive application controls updated with a new recommendation and support for wildcards in path rules

The adaptive application controls feature has received two significant updates:
- A new recommendation identifies potentially legitimate behavior that hasn't previously been allowed. The new recommendation, **Allowlist rules in your adaptive application control policy should be updated**, prompts you to add new rules to the existing policy to reduce the number of false positives in adaptive application controls violation alerts.
- Path rules now support wildcards. From this update, you can configure allowed path rules using wildcards (see the example after this list). There are two supported scenarios:
  - Using a wildcard at the end of a path to allow all executables within this folder and sub-folders
  - Using a wildcard in the middle of a path to enable a known executable name with a changing folder name (e.g. personal user folders with a known executable, automatically generated folder names, etc.).
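To make the two placements concrete, here are hypothetical examples of such rules (illustrative paths only; real rules are configured in the adaptive application controls policy itself, not in code):

```python
# Hypothetical wildcard path rules illustrating the two supported placements.
path_rules = [
    r"C:\Deployments\Tools\*",      # wildcard at the end: every executable in the folder and its sub-folders
    r"C:\Users\*\App\updater.exe",  # wildcard in the middle: known executable, changing folder name
]
```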
[Learn more about adaptive application controls](adaptive-application-controls.md).

### Six policies for SQL advanced data security deprecated

Six policies related to advanced data security for SQL machines are being deprecated:
Six policies related to advanced data security for SQL machines are being deprec
Learn more about [built-in policies](./policy-reference.md).

## June 2020

Updates in June include:
Updates in June include:
- [New recommendation for using NSGs to protect non-internet-facing virtual machines](#new-recommendation-for-using-nsgs-to-protect-non-internet-facing-virtual-machines)
- [New policies for enabling threat protection and advanced data security](#new-policies-for-enabling-threat-protection-and-advanced-data-security)

### Secure score API (preview)

You can now access your score via the [secure score API](/rest/api/securitycenter/securescores/) (currently in preview). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the **Secure Scores** API to get the score for a specific subscription. In addition, you can use the **Secure Score Controls** API to list the security controls and the current score of your subscriptions.
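A minimal sketch of calling the Secure Scores API from Python follows; the `api-version` value and response fields shown are assumptions to check against the linked API reference, and the subscription ID and token are placeholders:

```python
# Illustrative only: fetch the aggregated secure score for one subscription.
import requests

subscription_id = "<subscription-id>"
token = "<bearer-token>"  # an Azure AD token for https://management.azure.com/

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/secureScores?api-version=2020-01-01"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for score in resp.json().get("value", []):
    print(score["name"], score["properties"]["score"])  # 'score' holds max/current
```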
For examples of external tools made possible with the secure score API, see [the
Learn more about [secure score and security controls in Azure Security Center](secure-score-security-controls.md).

### Advanced data security for SQL machines (Azure, other clouds, and on-premises) (preview)

Azure Security Center's advanced data security for SQL machines now protects SQL Servers hosted in Azure, on other cloud environments, and even on-premises machines. This extends the protections for your Azure-native SQL Servers to fully support hybrid environments.
Set up involves two steps:
Learn more about [advanced data security for SQL machines](defender-for-sql-usage.md).

### Two new recommendations to deploy the Log Analytics agent to Azure Arc machines (preview)

Two new recommendations have been added to help deploy the [Log Analytics Agent](../azure-monitor/agents/log-analytics-agent.md) to your Azure Arc machines and ensure they're protected by Azure Security Center:
Two new recommendations have been added to help deploy the [Log Analytics Agent]
These new recommendations will appear in the same four security controls as the existing (related) recommendation, **Monitoring agent should be installed on your machines**: remediate security configurations, apply adaptive application control, apply system updates, and enable endpoint protection.
The recommendations also include the Quick fix capability to help speed up the deployment process.
Learn more about these two new recommendations in the [Compute and app recommendations](recommendations-reference.md#recs-compute) table.
Learn more about how Azure Security Center uses the agent in [What is the Log An
Learn more about [extensions for Azure Arc machines](../azure-arc/servers/manage-vm-extensions.md). - ### New policies to create continuous export and workflow automation configurations at scale Automating your organization's monitoring and incident response processes can greatly improve the time it takes to investigate and mitigate security incidents.
To deploy your automation configurations across your organization, use these bui
The policy definitions can be found in Azure Policy: - |Goal |Policy |Policy ID | |||| |Continuous export to Event Hubs|[Deploy export to Event Hubs for Azure Security Center alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcdfcce10-4578-4ecd-9703-530938e4abcb)|cdfcce10-4578-4ecd-9703-530938e4abcb|
The policy definitions can be found in Azure Policy:
|Workflow automation for security alerts|[Deploy Workflow Automation for Azure Security Center alerts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff1525828-9a90-4fcf-be48-268cdd02361e)|f1525828-9a90-4fcf-be48-268cdd02361e| |Workflow automation for security recommendations|[Deploy Workflow Automation for Azure Security Center recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef| - Get started with [workflow automation templates](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation). Learn more about using the two export policies in [Configure workflow automation at scale using the supplied policies](workflow-automation.md#configure-workflow-automation-at-scale-using-the-supplied-policies) and [Set up a continuous export](continuous-export.md#set-up-a-continuous-export). - ### New recommendation for using NSGs to protect non-internet-facing virtual machines The "implement security best practices" security control now includes the following new recommendation:
An existing recommendation, **Internet-facing virtual machines should be protect
Learn more in the [Network recommendations](recommendations-reference.md#recs-networking) table.

### New policies for enabling threat protection and advanced data security

The new policy definitions below were added to the ASC Default initiative and are designed to assist with enabling threat protection or advanced data security for the relevant resource types.

The policy definitions can be found in Azure Policy:

| Policy | Policy ID |
|--|--|
| [Advanced data security should be enabled on Azure SQL Database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) | 7fe3b40f-802b-4cdd-8bd4-fd799c948cc2 |
| [Advanced threat protection should be enabled on Azure Kubernetes Service clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f523b5cd1-3e23-492f-a539-13118b6d1e3a) | 523b5cd1-3e23-492f-a539-13118b6d1e3a |
| [Advanced threat protection should be enabled on Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4da35fc9-c9e7-4960-aec9-797fe7d9051d) | 4da35fc9-c9e7-4960-aec9-797fe7d9051d |

Learn more about [Threat protection in Azure Security Center](azure-defender.md).

## May 2020

Updates in May include:

- [Alert suppression rules (preview)](#alert-suppression-rules-preview)
- [Virtual machine vulnerability assessment is now generally available](#virtual-machine-vulnerability-assessment-is-now-generally-available)
- [Changes to just-in-time (JIT) virtual machine (VM) access](#changes-to-just-in-time-jit-virtual-machine-vm-access)
- [Custom policies with custom metadata are now generally available](#custom-policies-with-custom-metadata-are-now-generally-available)
- [Crash dump analysis capabilities migrating to fileless attack detection](#crash-dump-analysis-capabilities-migrating-to-fileless-attack-detection)

### Alert suppression rules (preview)
This new feature (currently in preview) helps reduce alert fatigue. Use rules to automatically hide alerts that are known to be innocuous or related to normal activities in your organization. This lets you focus on the most relevant threats.
Alerts that match your enabled suppression rules will still be generated, but their state will be set to dismissed. You can see the state in the Azure portal or however you access your Security Center security alerts.
Suppression rules define the criteria for which alerts should be automatically dismissed.
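Rules can be managed from the portal; as a rough, non-authoritative sketch, a rule could also be created through the alerts suppression rules REST API that backed this preview. The rule name, alert type, and scope fields below are illustrative assumptions, not an exact payload:

```azurecli
# Sketch only: create or update a suppression rule via the preview REST API.
# Every name and field value here is illustrative.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/alertsSuppressionRules/dismissTestVmAlerts?api-version=2019-01-01-preview" \
  --body '{
    "properties": {
      "alertType": "<alert-type-to-suppress>",
      "state": "Enabled",
      "reason": "Other",
      "comment": "Known benign activity on our test VMs",
      "suppressionAlertsScope": {
        "allOf": [
          { "field": "entities.ip.address", "in": ["10.0.0.4"] }
        ]
      }
    }
  }'
```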
Learn more about [suppressing alerts from Azure Security Center's threat protection](alerts-suppression-rules.md).

### Virtual machine vulnerability assessment is now generally available

Security Center's standard tier now includes an integrated vulnerability assessment for virtual machines for no additional fee. This extension is powered by Qualys but reports its findings directly back to Security Center. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Security Center.
The new solution can continuously scan your virtual machines to find vulnerabilities and present the findings in Security Center.

To deploy the solution, use the new security recommendation:
Learn more about [Security Center's integrated vulnerability assessment for virtual machines](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner).

### Changes to just-in-time (JIT) virtual machine (VM) access

Security Center includes an optional feature to protect the management ports of your VMs. This provides a defense against the most common form of brute force attacks.
This update brings the following changes to this feature:
Learn more about [the JIT access feature](just-in-time-access-usage.md).

### Custom recommendations have been moved to a separate security control

One security control introduced with the enhanced secure score was "Implement security best practices". Any custom recommendations created for your subscriptions were automatically placed in that control.
To make it easier to find your custom recommendations, we've moved them into a dedicated security control, "Custom recommendations". This control has no impact on your secure score.

Learn more about security controls in [Enhanced secure score (preview) in Azure Security Center](secure-score-security-controls.md).

### Toggle added to view recommendations in controls or as a flat list

Security controls are logical groups of related security recommendations. They reflect your vulnerable attack surfaces. A control is a set of security recommendations, with instructions that help you implement those recommendations.
Learn more about security controls in [Enhanced secure score (preview) in Azure
:::image type="content" source="./media/secure-score-security-controls/recommendations-group-by-toggle.gif" alt-text="Group by controls toggle for recommendations.":::
### Expanded security control "Implement security best practices"

One security control introduced with the enhanced secure score is "Implement security best practices". When a recommendation is in this control, it doesn't impact the secure score.
With this update, three recommendations have moved out of the controls in which they were originally placed, and into this best practices control. We've taken this step because we've determined that the risk of these three recommendations is lower than was initially thought.
Learn more about Windows Defender Exploit Guard in [Create and deploy an Exploit
Learn more about security controls in [Enhanced secure score (preview)](secure-score-security-controls.md).

### Custom policies with custom metadata are now generally available

Custom policies are now part of the Security Center recommendations experience, secure score, and the regulatory compliance standards dashboard. This feature is now generally available and allows you to extend your organization's security assessment coverage in Security Center.
Create a custom initiative in Azure Policy, add policies to it, onboard it to Azure Security Center, and visualize it as recommendations.
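As a minimal sketch of the first step with the Azure CLI (the initiative name, display name, and definitions file are hypothetical; onboarding the initiative to Security Center is then done from its security policy page):

```azurecli
# Sketch: create a custom initiative (policy set definition) from a JSON file
# of policy definition references. All names here are placeholders.
az policy set-definition create \
  --name "contoso-custom-security-initiative" \
  --display-name "Contoso custom security checks" \
  --definitions custom-policies.json
```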
We've now also added the option to edit the custom recommendation metadata. Meta
Learn more about [enhancing your custom recommendations with detailed information](custom-security-policies.md#enhance-your-custom-recommendations-with-detailed-information).

### Crash dump analysis capabilities migrating to fileless attack detection
We are integrating the Windows crash dump analysis (CDA) detection capabilities into [fileless attack detection](defender-for-servers-introduction.md#what-are-the-benefits-of-defender-for-servers). Fileless attack detection analytics brings improved versions of the following security alerts for Windows machines: Code injection discovered, Masquerading Windows Module Detected, Shell code discovered, and Suspicious code segment detected.

Some of the benefits of this transition:

- **Proactive and timely malware detection** - The CDA approach involved waiting for a crash to occur and then running analysis to find malicious artifacts. Using fileless attack detection brings proactive identification of in-memory threats while they are running.
- **Enriched alerts** - The security alerts from fileless attack detection include enrichments that aren't available from CDA, such as the active network connections information.
- **Alert aggregation** - When CDA detected multiple attack patterns within a single crash dump, it triggered multiple security alerts. Fileless attack detection combines all of the identified attack patterns from the same process into a single alert, removing the need to correlate multiple alerts.
- **Reduced requirements on your Log Analytics workspace** - Crash dumps containing potentially sensitive data will no longer be uploaded to your Log Analytics workspace.

## April 2020

Updates in April include:

- [Dynamic compliance packages are now generally available](#dynamic-compliance-packages-are-now-generally-available)
- [Identity recommendations now included in Azure Security Center free tier](#identity-recommendations-now-included-in-azure-security-center-free-tier)

### Dynamic compliance packages are now generally available

The Azure Security Center regulatory compliance dashboard now includes **dynamic compliance packages** (now generally available) to track additional industry and regulatory standards.
Now, you can add standards such as:
- **Azure CIS 1.1.0 (new)** (which is a more complete representation of Azure CIS 1.1.0)

In addition, we've recently added the [Azure Security Benchmark](/security/benchmark/azure/introduction), the Microsoft-authored Azure-specific guidelines for security and compliance best practices based on common compliance frameworks. Additional standards will be supported in the dashboard as they become available.

Learn more about [customizing the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).
### Identity recommendations now included in Azure Security Center free tier
Learn more about [identity and access recommendations](recommendations-reference
Learn more about [Managing multi-factor authentication (MFA) enforcement on your subscriptions](multi-factor-authentication-enforcement.md).

## March 2020

Updates in March include:
- [Improved just-in-time experience](#improved-just-in-time-experience)
- [Two security recommendations for web applications deprecated](#two-security-recommendations-for-web-applications-deprecated)

### Workflow automation is now generally available

The workflow automation feature of Azure Security Center is now generally available. Use it to automatically trigger Logic Apps on security alerts and recommendations. In addition, manual triggers are available for alerts and all recommendations that have the quick fix option available.
For more information about the automatic and manual Security Center capabilities
Learn more about [creating Logic Apps](../logic-apps/logic-apps-overview.md).

### Integration of Azure Security Center with Windows Admin Center

It's now possible to move your on-premises Windows servers from the Windows Admin Center directly to the Azure Security Center. Security Center then becomes your single pane of glass to view security information for all your Windows Admin Center resources, including on-premises servers, virtual machines, and additional PaaS workloads.
After moving a server from Windows Admin Center to Azure Security Center, you'll
Learn more about [how to integrate Azure Security Center with Windows Admin Center](windows-admin-center-integration.md).

### Protection for Azure Kubernetes Service

Azure Security Center is expanding its container security features to protect Azure Kubernetes Service (AKS).
Learn more about [Azure Kubernetes Services' integration with Security Center](d
Learn more about [the container security features in Security Center](defender-for-containers-introduction.md).

### Improved just-in-time experience

The features, operation, and UI for Azure Security Center's just-in-time tools that secure your management ports have been enhanced as follows:
- **Justification field** - When requesting access to a virtual machine (VM) through the just-in-time page of the Azure portal, a new optional field is available to enter a justification for the request. Information entered into this field can be tracked in the activity log.
- **Automatic cleanup of redundant just-in-time (JIT) rules** - Whenever you update a JIT policy, a cleanup tool automatically runs to check the validity of your entire ruleset. The tool looks for mismatches between rules in your policy and rules in the NSG. If the cleanup tool finds a mismatch, it determines the cause and, when it's safe to do so, removes built-in rules that aren't needed anymore. The cleaner never deletes rules that you've created.
Learn more about [the JIT access feature](just-in-time-access-usage.md).

### Two security recommendations for web applications deprecated

Two security recommendations related to web applications are being deprecated:
- The rules for web applications on IaaS NSGs should be hardened. (Related policy: The NSGs rules for web applications on IaaS should be hardened)
These recommendations will no longer appear in the Security Center list of recom
Learn more about [security recommendations](recommendations-reference.md).

## February 2020

### Fileless attack detection for Linux (preview)
As attackers increasingly employ stealthier methods to avoid detection, Azure Secu
- minimize or eliminate traces of malware on disk
- greatly reduce the chances of detection by disk-based malware scanning solutions
To counter this threat, Azure Security Center released fileless attack detection for Windows in October 2018, and has now extended fileless attack detection to Linux as well.
## January 2020
Familiarize yourself with the secure score changes during the preview phase and
Learn more about [enhanced secure score (preview)](secure-score-security-controls.md).

## November 2019

Updates in November include:

- [Threat Protection for Azure Key Vault in North America regions (preview)](#threat-protection-for-azure-key-vault-in-north-america-regions-preview)
- [Threat Protection for Azure Storage includes Malware Reputation Screening](#threat-protection-for-azure-storage-includes-malware-reputation-screening)
- [Workflow automation with Logic Apps (preview)](#workflow-automation-with-logic-apps-preview)
- [Quick Fix for bulk resources generally available](#quick-fix-for-bulk-resources-generally-available)
- [Scan container images for vulnerabilities (preview)](#scan-container-images-for-vulnerabilities-preview)
- [Additional regulatory compliance standards (preview)](#additional-regulatory-compliance-standards-preview)
- [Threat Protection for Azure Kubernetes Service (preview)](#threat-protection-for-azure-kubernetes-service-preview)
- [Virtual machine vulnerability assessment (preview)](#virtual-machine-vulnerability-assessment-preview)
- [Advanced data security for SQL servers on Azure Virtual Machines (preview)](#advanced-data-security-for-sql-servers-on-azure-virtual-machines-preview)
- [Support for custom policies (preview)](#support-for-custom-policies-preview)
- [Extending Azure Security Center coverage with platform for community and partners](#extending-azure-security-center-coverage-with-platform-for-community-and-partners)
- [Advanced integrations with export of recommendations and alerts (preview)](#advanced-integrations-with-export-of-recommendations-and-alerts-preview)
- [Onboard on-prem servers to Security Center from Windows Admin Center (preview)](#onboard-on-prem-servers-to-security-center-from-windows-admin-center-preview)
### Threat Protection for Azure Key Vault in North America Regions (preview)
Azure Key Vault is an essential service for protecting data and improving perfor
Azure Security Center's support for Threat Protection for Azure Key Vault provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit key vaults. This new layer of protection allows customers to address threats against their key vaults without being a security expert or having to manage security monitoring systems. The feature is in public preview in North America regions.

### Threat Protection for Azure Storage includes Malware Reputation Screening

Threat protection for Azure Storage offers new detections powered by Microsoft Threat Intelligence for detecting malware uploads to Azure Storage using hash reputation analysis, and for detecting suspicious access from an active Tor exit node (an anonymizing proxy). You can now view detected malware across storage accounts using Azure Security Center.

### Workflow automation with Logic Apps (preview)

Organizations with centrally managed security and IT/operations implement internal workflow processes to drive required action within the organization when discrepancies are discovered in their environments. In many cases, these workflows are repeatable processes, and automation can greatly streamline processes within the organization.
For more information about the automatic and manual Security Center capabilities
To learn about creating Logic Apps, see [Azure Logic Apps](../logic-apps/logic-apps-overview.md).

### Quick Fix for bulk resources generally available

With the many tasks that a user is given as part of Secure Score, the ability to effectively remediate issues across a large fleet can become challenging.
Quick fix is generally available today to customers as part of the Security Center
See which recommendations have quick fix enabled in the [reference guide to security recommendations](recommendations-reference.md).

### Scan container images for vulnerabilities (preview)

Azure Security Center can now scan container images in Azure Container Registry for vulnerabilities.
The image scanning works by parsing the container image file, then checking to s
The scan itself is automatically triggered when pushing new container images to Azure Container Registry. Found vulnerabilities will surface as Security Center recommendations and be included in the secure score, together with information on how to patch them to reduce the attack surface they expose.

### Additional regulatory compliance standards (preview)

The Regulatory Compliance dashboard provides insights into your compliance posture based on Security Center assessments. The dashboard shows how your environment complies with controls and requirements designated by specific regulatory standards and industry benchmarks, and provides prescriptive recommendations for how to address these requirements.
The regulatory compliance dashboard has thus far supported four built-in standar
[Learn more about customizing the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md).

### Threat Protection for Azure Kubernetes Service (preview)

Kubernetes is quickly becoming the new standard for deploying and managing software in the cloud. Few people have extensive experience with Kubernetes, and many focus only on general engineering and administration while overlooking the security aspect. Kubernetes environments need to be configured carefully to be secure, making sure no container-focused attack surface doors are left open for attackers. Security Center is expanding its support in the container space to one of the fastest growing services in Azure - Azure Kubernetes Service (AKS).
The new capabilities in this public preview release include:
- **Secure Score recommendations** - Actionable items to help customers comply with security best practices for AKS, and increase their secure score. Recommendations include items such as "Role-based access control should be used to restrict access to a Kubernetes Service Cluster".
- **Threat Detection** - Host and cluster-based analytics, such as "A privileged container detected".

### Virtual machine vulnerability assessment (preview)

Applications that are installed in virtual machines can often have vulnerabilities that could lead to a breach of the virtual machine. We are announcing that the Security Center standard tier includes built-in vulnerability assessment for virtual machines for no additional fee. The vulnerability assessment, powered by Qualys in the public preview, will allow you to continuously scan all the installed applications on a virtual machine to find vulnerable applications and present the findings in the Security Center portal's experience. Security Center takes care of all deployment operations so that no extra work is required from the user. Going forward, we are planning to provide vulnerability assessment options to support our customers' unique business needs.

[Learn more about vulnerability assessments for your Azure Virtual Machines](deploy-vulnerability-assessment-vm.md).

### Advanced data security for SQL servers on Azure Virtual Machines (preview)

Azure Security Center's support for threat protection and vulnerability assessment for SQL DBs running on IaaS VMs is now in preview.
[Advanced threat protection](/azure/azure-sql/database/threat-detection-overview) detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit your SQL server. It continuously monitors your database for suspicious activities and provides action-oriented security alerts on anomalous database access patterns. These alerts provide the suspicious activity details and recommended actions to investigate and mitigate the threat.

### Support for custom policies (preview)

Azure Security Center now supports custom policies (in preview).
Our customers have been wanting to extend their current security assessments cov
These new policies will be part of the Security Center recommendations experience, Secure Score, and the regulatory compliance standards dashboard. With the support for custom policies, you're now able to create a custom initiative in Azure Policy, then add it as a policy in Security Center and visualize it as a recommendation.

### Extending Azure Security Center coverage with platform for community and partners

Use Security Center to receive recommendations not only from Microsoft but also from existing solutions from partners such as Check Point, Tenable, and CyberArk, with many more integrations coming. Security Center's simple onboarding flow can connect your existing solutions to Security Center, enabling you to view your security posture recommendations in a single place, run unified reports, and leverage all of Security Center's capabilities against both built-in and partner recommendations. You can also export Security Center recommendations to partner products.

[Learn more about Microsoft Intelligent Security Association](https://www.microsoft.com/security/partnerships/intelligent-security-association).

### Advanced integrations with export of recommendations and alerts (preview)

In order to enable enterprise-level scenarios on top of Security Center, it's now possible to consume Security Center alerts and recommendations in additional places beyond the Azure portal or API. These can be directly exported to an event hub and to Log Analytics workspaces. Here are a few workflows you can create around these new capabilities:
- With export to Log Analytics workspace, you can create custom dashboards with Power BI.
- With export to Event Hubs, you'll be able to export Security Center alerts and recommendations to your third-party SIEMs, to a third-party solution, or Azure Data Explorer.

### Onboard on-prem servers to Security Center from Windows Admin Center (preview)

Windows Admin Center is a management portal for Windows servers that are not deployed in Azure, offering them several Azure management capabilities such as backup and system updates. We have recently added the ability to onboard these non-Azure servers to be protected by ASC directly from the Windows Admin Center experience. With this new experience, users will be able to onboard a WAC server to Azure Security Center and enable viewing its security alerts and recommendations directly in the Windows Admin Center experience.

## September 2019

Updates in September include:

- [Managing rules with adaptive application controls improvements](#managing-rules-with-adaptive-application-controls-improvements)
- [Control container security recommendation using Azure Policy](#control-container-security-recommendation-using-azure-policy)
### Managing rules with adaptive application controls improvements
The experience of managing rules for virtual machines using adaptive application
[Learn more about adaptive application controls](adaptive-application-controls.md).

### Control container security recommendation using Azure Policy

Azure Security Center's recommendation to remediate vulnerabilities in container security can now be enabled or disabled via Azure Policy. To view your enabled security policies, from Security Center open the Security Policy page.

## August 2019

Updates in August include:

- [Just-in-time (JIT) VM access for Azure Firewall](#just-in-time-jit-vm-access-for-azure-firewall)
- [Single click remediation to boost your security posture (preview)](#single-click-remediation-to-boost-your-security-posture-preview)
- [Cross-tenant management](#cross-tenant-management)

### Just-in-time (JIT) VM access for Azure Firewall
Just-in-time (JIT) VM access for Azure Firewall is now generally available. Use it to secure your Azure Firewall protected environments in addition to your NSG protected environments.
Requests are logged in the Azure Activity Log, so you can easily monitor and aud
[Learn more about Azure Firewall](../firewall/overview.md).

### Single click remediation to boost your security posture (preview)

Secure score is a tool that helps you assess your workload security posture. It reviews your security recommendations and prioritizes them for you, so you know which recommendations to perform first. This helps you find the most serious security vulnerabilities to prioritize investigation.
This operation will allow you to select the resources you want to apply the reme
See which recommendations have quick fix enabled in the [reference guide to security recommendations](recommendations-reference.md).

### Cross-tenant management

Security Center now supports cross-tenant management scenarios as part of Azure Lighthouse. This enables you to gain visibility and manage the security posture of multiple tenants in Security Center.
[Learn more about cross-tenant management experiences](cross-tenant-management.md).

## July 2019

### Updates to network recommendations

Azure Security Center (ASC) has launched new networking recommendations and improved some existing ones. Now, using Security Center ensures even greater networking protection for your resources.
[Learn more about network recommendations](recommendations-reference.md#recs-networking).

## June 2019

### Adaptive network hardening - generally available

One of the biggest attack surfaces for workloads running in the public cloud is connections to and from the public internet. Our customers find it hard to know which Network Security Group (NSG) rules should be in place to make sure that Azure workloads are only available to required source ranges. With this feature, Security Center learns the network traffic and connectivity patterns of Azure workloads and provides NSG rule recommendations for internet-facing virtual machines. This helps our customers better configure their network access policies and limit their exposure to attacks.
[Learn more about adaptive network hardening](adaptive-network-hardening.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Last updated 05/30/2022
Defender for Cloud is in active development and receives improvements on an ongoing basis. To stay up to date with the most recent developments, this page provides you with information about new features, bug fixes, and deprecated functionality.
This page is updated frequently, so revisit it often.

To learn about *planned* changes that are coming soon to Defender for Cloud, see [Important upcoming changes to Microsoft Defender for Cloud](upcoming-changes.md).
> [!TIP]
> If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
## June 2022

Updates in June include:

- [Drive implementation of security recommendations to enhance your security posture](#drive-implementation-of-security-recommendations-to-enhance-your-security-posture)
- [Filter security alerts by IP address](#filter-security-alerts-by-ip-address)
- [General availability (GA) of Defender for SQL on machines for AWS and GCP environments](#general-availability-ga-of-defender-for-sql-on-machines-for-aws-and-gcp-environments)
### Drive implementation of security recommendations to enhance your security posture

Today's increasing threats to organizations stretch the limits of security personnel to protect their expanding workloads. Security teams are challenged to implement the protections defined in their security policies.

Now with the governance experience, security teams can assign remediation of security recommendations to the resource owners and require a remediation schedule. They can have full transparency into the progress of the remediation and get notified when tasks are overdue.

Learn more about the governance experience in [Driving your organization to remediate security issues with recommendation governance](governance-rules.md).

### Filter security alerts by IP address

In many cases of attacks, you want to track alerts based on the IP address of the entity involved in the attack. Up until now, the IP appeared only in the "Related Entities" section in the single alert blade. Now, you can filter the alerts in the security alerts blade to see the alerts related to the IP address, and you can search for a specific IP address.
### General availability (GA) of Defender for SQL on machines for AWS and GCP environments

The database protection capabilities provided by Microsoft Defender for Cloud have added support for your SQL servers that are hosted in either AWS or GCP environments.
Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#
The Defender profile (preview) is required for Defender for Containers to provide runtime protections and collect signals from nodes. You can now use the Azure CLI to [add and remove the Defender profile](defender-for-containers-enable.md?tabs=k8s-deploy-cli%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Ck8s-remove-cli&pivots=defender-for-container-aks#use-azure-cli-to-deploy-the-defender-extension) for an AKS cluster.

> [!NOTE]
> This option is included in [Azure CLI 3.7 and above](/cli/azure/update-azure-cli).
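As a minimal sketch (assuming an existing cluster; the resource group and cluster names are placeholders):

```azurecli
# Sketch: add the Defender profile to an existing AKS cluster...
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-defender

# ...or remove it again.
az aks update --resource-group myResourceGroup --name myAKSCluster --disable-defender
```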
## April 2022
To learn more, see [Stream alerts to Splunk and QRadar](export-to-siem.md#stream
### Deprecated the Azure Cache for Redis recommendation
The recommendation `Azure Cache for Redis should reside within a virtual network` (Preview) has been deprecated. We've changed our guidance for securing Azure Cache for Redis instances. We recommend the use of a private endpoint to restrict access to your Azure Cache for Redis instance, instead of a virtual network.
### New alert variant for Microsoft Defender for Storage (preview) to detect exposure of sensitive data
The new alert, `Publicly accessible storage containers with potentially sensitiv
| Alert (alert type) | Description | MITRE tactics | Severity |
|--|--|--|--|
|**PREVIEW - Publicly accessible storage containers with potentially sensitive data have been exposed** <br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery.Sensitive)| Someone has scanned your Azure Storage account and exposed container(s) that allow public access. One or more of the exposed containers have names that indicate that they may contain sensitive data. <br> <br> This usually indicates reconnaissance by a threat actor that is scanning for misconfigured publicly accessible storage containers that may contain sensitive data. <br> <br> After a threat actor successfully discovers a container, they may continue by exfiltrating the data. <br> ✔ Azure Blob Storage <br> ✖ Azure Files <br> ✖ Azure Data Lake Storage Gen2 | Collection | High |
### Container scan alert title augmented with IP address reputation

An IP address's reputation can indicate whether the scanning activity originates from a known threat actor, or from an actor that is using the Tor network to hide their identity. Both of these indicators suggest that there's malicious intent. The IP address's reputation is provided by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684).
The addition of the IP address's reputation to the alert title provides a way to quickly evaluate the intent of the actor, and thus the severity of the threat.
The following alerts will include this information:

- `Publicly accessible storage containers have been exposed`
- `Publicly accessible storage containers with potentially sensitive data have been exposed`
- `Publicly accessible storage containers have been scanned. No publicly accessible data was discovered`
For example, the added information to the title of the `Publicly accessible storage containers have been exposed` alert will look like this:

- `Publicly accessible storage containers have been exposed`**`by a suspicious IP address`**
- `Publicly accessible storage containers have been exposed`**`by a Tor exit node`**
All of the alerts for Microsoft Defender for Storage will continue to include threat intelligence information in the IP entity under the alert's Related Entities section.
## February 2022

Updates in February include:
- [Microsoft Defender for Azure Cosmos DB plan released for preview](#microsoft-defender-for-azure-cosmos-db-plan-released-for-preview)
- [Threat protection for Google Kubernetes Engine (GKE) clusters](#threat-protection-for-google-kubernetes-engine-gke-clusters)

### Kubernetes workload protection for Arc-enabled Kubernetes clusters
Defender for Containers previously only protected Kubernetes workloads running in Azure Kubernetes Service. We've now extended the protective coverage to include Azure Arc-enabled Kubernetes clusters.
Learn how to protect, and [connect your GCP projects](quickstart-onboard-gcp.md)
### Microsoft Defender for Azure Cosmos DB plan released for preview
We have extended Microsoft Defender for Cloud's database coverage. You can now enable protection for your Azure Cosmos DB databases.

Microsoft Defender for Azure Cosmos DB is an Azure-native layer of security that detects any attempt to exploit databases in your Azure Cosmos DB accounts. Microsoft Defender for Azure Cosmos DB detects potential SQL injections, known bad actors based on Microsoft Threat Intelligence, suspicious access patterns, and potential exploitation of your database through compromised identities, or malicious insiders.
It continuously analyzes the customer data stream generated by the Azure Cosmos DB services.
When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud together with the details of the suspicious activity, the relevant investigation steps, remediation actions, and security recommendations.

There's no impact on database performance when enabling the service, because Defender for Azure Cosmos DB doesn't access the Azure Cosmos DB account data.
Learn more at [Introduction to Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md).
Learn how to [enable your database security at the subscription level](quickstar
Following our recent announcement [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances), Microsoft Defender for Containers has extended its Kubernetes threat protection, behavioral analytics, and built-in admission control policies to Google's Kubernetes Engine (GKE) Standard clusters. You can easily onboard any existing or new GKE Standard clusters to your environment through our automatic onboarding capabilities. Check out [Container security with Microsoft Defender for Cloud](defender-for-containers-introduction.md#vulnerability-assessment) for a full list of available features.

## January 2022

Updates in January include:
Microsoft Defender for Resource Manager automatically monitors the resource mana
The plan's protections greatly enhance an organization's resiliency against attacks from threat actors and significantly increase the number of Azure resources protected by Defender for Cloud.
In December 2020, we introduced the preview of Defender for Resource Manager, and in May 2021 the plan was released for general availability.
With this update, we've comprehensively revised the focus of the Microsoft Defender for Resource Manager plan. The updated plan includes many **new alerts focused on identifying suspicious invocation of high-risk operations**. These new alerts provide extensive monitoring for attacks across the *complete* [MITRE ATT&CK® matrix for cloud-based techniques](https://attack.mitre.org/matrices/enterprise/cloud/).
The new alerts for this Defender plan cover these intentions as shown in the fol
| **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium |
| **Suspicious invocation of a high-risk 'Impact' operation detected (Preview)**<br>(ARM_AnomalousOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Impact | Medium |

In addition, these two alerts from this plan have come out of preview:

| Alert (alert type) | Description | MITRE tactics (intentions)| Severity |
|--|--|--|--|
| **Azure Resource Manager operation from suspicious IP address**<br>(ARM_OperationFromSuspiciousIP) | Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium |
| **Azure Resource Manager operation from suspicious proxy IP address**<br>(ARM_OperationFromSuspiciousProxyIP) | Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |

### Recommendations to enable Microsoft Defender plans on workspaces (in preview)
To benefit from all of the security features available from [Microsoft Defender for Servers](defender-for-servers-introduction.md) and [Microsoft Defender for SQL on machines](defender-for-sql-introduction.md), the plans must be enabled on **both** the subscription and workspace levels.

When a machine is in a subscription with one of these plans enabled, you'll be billed for the full protections. However, if that machine is reporting to a workspace *without* the plan enabled, you won't actually receive those benefits.

We've added two recommendations that highlight workspaces without these plans enabled that nevertheless have machines reporting to them from subscriptions that *do* have the plan enabled.
The two recommendations, which both offer automated remediation (the 'Fix' action), are:
|Recommendation |Description |Severity |
|---|---|---|
|[Microsoft Defender for Servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |
|[Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |

### Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)

Defender for Cloud uses the Log Analytics agent to gather security-related data from machines. The agent reads various security-related configurations and event logs and copies the data to your workspace for analysis.
Defender for Cloud's auto provisioning settings have a toggle for each type of supported extension, including the Log Analytics agent.

In a further expansion of our hybrid cloud features, we've added an option to auto provision the Log Analytics agent to machines connected to Azure Arc.

As with the other auto provisioning options, this is configured at the subscription level.

When you enable this option, you'll be prompted for the workspace.
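As a rough illustration only, the subscription-wide auto provisioning default can also be toggled from the Azure CLI; note that this sketch flips the overall setting rather than the per-extension toggle described above:

```azurecli
# Sketch: turn on the subscription's default auto provisioning setting.
az security auto-provisioning-setting update --name "default" --auto-provision "On"
```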
> [!NOTE]
> For this preview, you can't select the default workspaces that were created by Defender for Cloud. To ensure you receive the full set of security features available for the Azure Arc-enabled servers, verify that you have the relevant security solution installed on the selected workspace.
We've removed the recommendation **Sensitive data in your SQL databases should b
Advance notice of this change appeared for the last six months in the [Important upcoming changes to Microsoft Defender for Cloud](upcoming-changes.md) page.
### Communication with suspicious domain alert expanded to include known Log4Shell-related domains

The following alert was previously only available to organizations that had enabled the [Microsoft Defender for DNS](defender-for-dns-introduction.md) plan.
With this update, the alert will also show for subscriptions with the [Microsoft Defender for Servers](defender-for-servers-introduction.md) or [Defender for App Service](defender-for-app-service-introduction.md) plan enabled.
The new **Copy alert JSON** button puts the alert's details, in JSON format, i
For consistency with other recommendation names, we've renamed the following two recommendations:

- Recommendation to resolve vulnerabilities discovered in running container images
  - Previous name: Vulnerabilities in running container images should be remediated (powered by Qualys)
  - New name: Running container images should have vulnerability findings resolved
- Recommendation to enable diagnostic logs for Azure App Service
  - Previous name: Diagnostic logs should be enabled in App Service
  - New name: Diagnostic logs in App Service should be enabled
### Deprecate Kubernetes cluster containers should only listen on allowed ports policy
The active alerts workbook allows users to view a unified dashboard of their agg
The 'System updates should be installed on your machines' recommendation is now available on all government clouds. It's likely that this change will impact your government cloud subscription's secure score. We expect the change to lead to a decreased score, but it's possible the recommendation's inclusion might result in an increased score in some cases.-
-## December 2021
-
-Updates in December include:
--- [Microsoft Defender for Containers plan released for general availability (GA)](#microsoft-defender-for-containers-plan-released-for-general-availability-ga)-- [New alerts for Microsoft Defender for Storage released for general availability (GA)](#new-alerts-for-microsoft-defender-for-storage-released-for-general-availability-ga)-- [Improvements to alerts for Microsoft Defender for Storage](#improvements-to-alerts-for-microsoft-defender-for-storage)-- ['PortSweeping' alert removed from network layer alerts](#portsweeping-alert-removed-from-network-layer-alerts)-
-### Microsoft Defender for Containers plan released for general availability (GA)
-
-Over two years ago, we introduced [Defender for Kubernetes](defender-for-kubernetes-introduction.md) and [Defender for container registries](defender-for-container-registries-introduction.md) as part of the Azure Defender offering within Microsoft Defender for Cloud.
-
-With the release of [Microsoft Defender for Containers](defender-for-containers-introduction.md), we've merged these two existing Defender plans.
-
-The new plan:
--- **Combines the features of the two existing plans** - threat detection for Kubernetes clusters and vulnerability assessment for images stored in container registries-- **Brings new and improved features** - including multicloud support, host level threat detection with over **sixty** new Kubernetes-aware analytics, and vulnerability assessment for running images-- **Introduces Kubernetes-native at-scale onboarding** - by default, when you enable the plan all relevant components are configured to be deployed automatically-
-With this release, the availability and presentation of Defender for Kubernetes and Defender for container registries has changed as follows:
--- New subscriptions - The two previous container plans are no longer available-- Existing subscriptions - Wherever they appear in the Azure portal, the plans are shown as **Deprecated** with instructions for how to upgrade to the newer plan
- :::image type="content" source="media/release-notes/defender-plans-deprecated-indicator.png" alt-text="Defender for container registries and Defender for Kubernetes plans showing 'Deprecated' and upgrade information.":::
-
-The new plan is free for the month of December 2021. For the potential changes to the billing from the old plans to Defender for Containers, and for more information on the benefits introduced with this plan, see [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317).
-
-For more information, see:
--- [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md)-- [Enable Microsoft Defender for Containers](defender-for-containers-enable.md)-- [Introducing Microsoft Defender for Containers - Microsoft Tech Community](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)-- [Microsoft Defender for Containers | Defender for Cloud in the Field #3 - YouTube](https://www.youtube.com/watch?v=KeH0a3enLJ0&t=201s)--
-### New alerts for Microsoft Defender for Storage released for general availability (GA)
-
-Threat actors use tools and scripts to scan for publicly open containers in the hope of finding misconfigured open storage containers with sensitive data.
-
-Microsoft Defender for Storage detects these scanners so that you can block them and remediate your posture.
-
-The preview alert that detected this was called **"Anonymous scan of public storage containers"**. To provide greater clarity about the suspicious events discovered, we've divided this into **two** new alerts. These alerts are relevant to Azure Blob Storage only.
-
-We've improved the detection logic, updated the alert metadata, and changed the alert name and alert type.
-
-These are the new alerts:
-
-| Alert (alert type) | Description | MITRE tactic | Severity |
-|||--|-|
-| **Publicly accessible storage containers successfully discovered**<br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery) | A successful discovery of publicly open storage container(s) in your storage account was performed in the last hour by a scanning script or tool.<br><br> This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Medium |
-| **Publicly accessible storage containers unsuccessfully scanned**<br>(Storage.Blob_OpenContainersScanning.FailedAttempt) | A series of failed attempts to scan for publicly open storage containers were performed in the last hour. <br><br>This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Low |
--
-For more information, see:
-- [Threat matrix for storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/)
-- [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md)
-- [List of alerts provided by Microsoft Defender for Storage](alerts-reference.md#alerts-azurestorage)
-
-### Improvements to alerts for Microsoft Defender for Storage
-
-The initial access alerts now have improved accuracy and more data to support investigation.
-
-Threat actors use various techniques in the initial access to gain a foothold within a network. Two of the [Microsoft Defender for Storage](defender-for-storage-introduction.md) alerts that detect behavioral anomalies in this stage now have improved detection logic and additional data to support investigations.
-
-If you've [configured automations](workflow-automation.md) or defined [alert suppression rules](alerts-suppression-rules.md) for these alerts in the past, update them in accordance with these changes.
-
-#### Detecting access from a Tor exit node
-
-Access from a Tor exit node might indicate a threat actor trying to hide their identity.
-
-The alert is now tuned to trigger only for authenticated access, which results in higher accuracy and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
-
-An outlying pattern will have high severity, while less anomalous patterns will have medium severity.
-
-The alert name and description have been updated. The AlertType remains unchanged.
-
-- Alert name (old): Access from a Tor exit node to a storage account
-- Alert name (new): Authenticated access from a Tor exit node
-- Alert types: Storage.Blob_TorAnomaly / Storage.Files_TorAnomaly
-- Description: One or more storage container(s) / file share(s) in your storage account were successfully accessed from an IP address known to be an active exit node of Tor (an anonymizing proxy). Threat actors use Tor to make it difficult to trace the activity back to them. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity. Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2
-- MITRE tactic: Initial access
-- Severity: High/Medium
-
-#### Unusual unauthenticated access
-
-A change in access patterns may indicate that a threat actor was able to exploit public read access to storage containers, either by exploiting a mistake in access configurations, or by changing the access permissions.
-
-This medium severity alert is now tuned with improved behavioral logic, higher accuracy, and confidence that the activity is malicious. This enhancement reduces the benign positive rate.
-
-The alert name and description have been updated. The AlertType remains unchanged.
-
-- Alert name (old): Anonymous access to a storage account
-- Alert name (new): Unusual unauthenticated access to a storage container
-- Alert types: Storage.Blob_AnonymousAccessAnomaly
-- Description: This storage account was accessed without authentication, which is a change in the common access pattern. Read access to this container is usually authenticated. This might indicate that a threat actor was able to exploit public read access to storage container(s) in this storage account. Applies to: Azure Blob Storage
-- MITRE tactic: Collection
-- Severity: Medium
-
-For more information, see:
-
-- [Threat matrix for storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/)
-- [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md)
-- [List of alerts provided by Microsoft Defender for Storage](alerts-reference.md#alerts-azurestorage)
-
-### 'PortSweeping' alert removed from network layer alerts
-
-The following alert was removed from our network layer alerts due to inefficiencies:
-
-| Alert (alert type) | Description | MITRE tactics | Severity |
-||-|:--:||
-| **Possible outgoing port scanning activity detected**<br>(PortSweeping) | Network traffic analysis detected suspicious outgoing traffic from %{Compromised Host}. This traffic may be a result of a port scanning activity. When the compromised resource is a load balancer or an application gateway, the suspected outgoing traffic originated from one or more of the resources in the backend pool (of the load balancer or application gateway). If this behavior is intentional, please note that performing port scanning is against Azure Terms of service. If this behavior is unintentional, it may mean your resource has been compromised. | Discovery | Medium |
----
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Security recommendations in Microsoft Defender for Cloud
-description: This document walks you through how recommendations in Microsoft Defender for Cloud help you protect your Azure resources and stay in compliance with security policies.
- Previously updated : 05/11/2022
+ Title: Improving your security posture with recommendations in Microsoft Defender for Cloud
+description: This document walks you through how to identify security recommendations that will help you improve your security posture.
+ Last updated : 05/23/2022
-# Review your security recommendations
+# Find recommendations that can improve your security posture
-This article explains how to view and understand the recommendations in Microsoft Defender for Cloud to help you protect your multicloud resources.
+To improve your [secure score](secure-score-security-controls.md), you have to implement the security recommendations for your environment. From the list of recommendations, you can use filters to find the recommendations that have the most impact on your score, or the ones that you were assigned to implement.
-## View your recommendations <a name="monitor-recommendations"></a>
+To get to the list of recommendations:
-Defender for Cloud analyzes the security state of your resources to identify potential vulnerabilities.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Either:
+ - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment that you want to improve.
+ - Go to **Recommendations** in the Defender for Cloud menu.
-**To view your Secure score recommendations**:
+You can search for specific recommendations by name. Use the search box and filters above the list of recommendations to locate a recommendation, and look at the [details of the recommendation](security-policy-concept.md#security-recommendation-details) to decide whether to [remediate it](implement-security-recommendations.md), [exempt resources](exempt-resource.md), or [disable the recommendation](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations).
-1. Sign in to the [Azure portal](https://portal.azure.com).
+## Finding recommendations with high impact on your secure score<a name="monitor-recommendations"></a>
-1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
-
- :::image type="content" source="media/review-security-recommendations/recommendations-view.png" alt-text="Screenshot of the recommendations page.":::
-
- Here you'll see the recommendations applicable to your environment(s). Recommendations are grouped into security controls.
-
-1. Select **Secure score recommendations**.
-
- :::image type="content" source="media/review-security-recommendations/secure-score-recommendations.png" alt-text="Screenshot showing the location of the secure score recommendations tab.":::
-
- > [!NOTE]
- > Custom recommendations can be found under the All recommendations tab. Learn how to [Create custom security initiatives and policies](custom-security-policies.md).
-
- Secure score recommendations affect the secure score and are mapped to the various security controls. The All recommendations tab, allows you to see all of the recommendations including recommendations that are part of different regulatory compliance standards.
-
-1. (Optional) Select a relevant environment(s).
-
- :::image type="content" source="media/review-security-recommendations/environment-filter.png" alt-text="Screenshot of the environment filter, to select your filters.":::
-
-1. Select the :::image type="icon" source="media/review-security-recommendations/drop-down-arrow.png" border="false"::: to expand the control, and view a list of recommendations.
-
- :::image type="content" source="media/review-security-recommendations/list-recommendations.png" alt-text="Screenshot showing how to see the full list of recommendations by selecting the drop-down menu icon." lightbox="media/review-security-recommendations/list-recommendations-expanded.png":::
-
-1. Select a specific recommendation to view the recommendation details page.
-
- :::image type="content" source="./media/review-security-recommendations/recommendation-details-page.png" alt-text="Screenshot of the recommendation details page." lightbox="./media/review-security-recommendations/recommendation-details-page-expanded.png":::
+Your [secure score is calculated](secure-score-security-controls.md?branch=main#how-your-secure-score-is-calculated) based on the security recommendations that you have implemented. In order to increase your score and improve your security posture, you have to find recommendations with unhealthy resources and [remediate those recommendations](implement-security-recommendations.md).
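If you also want to track the score programmatically, the following is a minimal sketch that reads the current secure score with PowerShell. The Az.Security module and its `Get-AzSecuritySecureScore` cmdlet are assumptions of this example, not part of the portal flow described in this article:

```azurepowershell
# Minimal sketch - assumes the Az.Security module is installed:
#   Install-Module -Name Az.Security
# Returns the secure score for the currently selected subscription
Get-AzSecuritySecureScore
```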
- 1. For supported recommendations, the top toolbar shows any or all of the following buttons:
- - **Enforce** and **Deny** (see [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md)).
- - **View policy definition** to go directly to the Azure Policy entry for the underlying policy.
- - **Open query** - All recommendations have the option to view the detailed information about the affected resources using Azure Resource Graph Explorer.
- 1. **Severity indicator**.
- 1. **Freshness interval** (where relevant).
- 1. **Count of exempted resources** if exemptions exist for a recommendation, this shows the number of resources that have been exempted with a link to view the specific resources.
- 1. **Mapping to MITRE ATT&CK ® tactics and techniques** if a recommendation has defined tactics and techniques, select the icon for links to the relevant pages on MITRE's site. This applies only to Azure scored recommendations.
+The list of recommendations shows the **Potential score increase** that you can achieve when you remediate all of the recommendations in the security control.
- :::image type="content" source="media/review-security-recommendations/tactics-window.png" alt-text="Screenshot of the MITRE tactics mapping for a recommendation.":::
+To find recommendations that can improve your secure score:
- 1. **Description** - A short description of the security issue.
- 1. When relevant, the details page also includes a table of **related recommendations**:
+1. In the list of recommendations, use the **Potential score increase** to identify the security control that contains recommendations that will increase your secure score.
+ - You can also use the search box and filters above the list of recommendations to find specific recommendations.
+1. Open a security control to see the recommendations that have unhealthy resources.
- The relationship types are:
+When you [remediate](implement-security-recommendations.md) all of the recommendations in the security control, your secure score increases by the percentage points listed for the control.
- - **Prerequisite** - A recommendation that must be completed before the selected recommendation
- - **Alternative** - A different recommendation, which provides another way of achieving the goals of the selected recommendation
- - **Dependent** - A recommendation for which the selected recommendation is a prerequisite
+## Manage the owner and ETA of recommendations that are assigned to you
- For each related recommendation, the number of unhealthy resources is shown in the "Affected resources" column.
+[Security teams can assign a recommendation](governance-rules.md) to a specific person and assign a due date to drive your organization towards increased security. If you have recommendations assigned to you, you're accountable for remediating the resources affected by those recommendations, to help your organization stay compliant with the security policy.
- > [!TIP]
- > If a related recommendation is grayed out, its dependency isn't yet completed and so isn't available.
+Recommendations are listed as **On time** until their due date passes, when they change to **Overdue**. Until a recommendation is overdue, it doesn't affect the secure score. The security team can also apply a grace period during which overdue recommendations continue to have no effect on the secure score.
- 1. **Remediation steps** - A description of the manual steps required to remediate the security issue on the affected resources. For recommendations with the **Fix** option**, you can select **View remediation logic** before applying the suggested fix to your resources.
+To help you plan your work and report on progress, you can set an ETA for specific resources to show when you plan to have the recommendation resolved for those resources. You can also change the owner of the recommendation for specific resources, so that the person responsible for remediation is assigned to the resource.
- 1. **Affected resources** - Your resources are grouped into tabs:
- - **Healthy resources** – Relevant resources, which either aren't impacted or on which you've already remediated the issue.
- - **Unhealthy resources** – Resources that are still impacted by the identified issue.
- - **Not applicable resources** – Resources for which the recommendation can't give a definitive answer. The not applicable tab also includes reasons for each resource.
- :::image type="content" source="./media/review-security-recommendations/recommendations-not-applicable-reasons.png" alt-text="Not applicable resources with reasons.":::
- 1. Action buttons to remediate the recommendation or trigger a logic app.
+To change the owner of resources and set the ETA for remediation of recommendations that are assigned to you:
-## Search for a recommendation
+1. In the filters for the list of recommendations, select **Show my items only**.
-You can search for specific recommendations by name. The search box and filters above the list of recommendations can be used to help locate a specific recommendation.
+ - The status column indicates the recommendations that are on time, overdue, or completed.
+ - The insights column indicates the recommendations that are in a grace period, so they currently do not impact your secure score until they become overdue.
-Custom recommendations only appear under the All recommendations tab.
+1. Select an on time or overdue recommendation.
+1. For the resources that are assigned to you, set the owner of the resource:
+ 1. Select the resources that are owned by another person, and select **Change owner and set ETA**.
+ 1. Select **Change owner**, enter the email address of the owner of the resource, and select **Save**.
+ The owner of the resource gets a weekly email listing the recommendations that they are assigned to.
+1. For resources that you own, set an ETA for remediation:
+ 1. Select resources that you plan to remediate by the same date, and select **Change owner and set ETA**.
+ 1. Select **Change ETA** and set the date by which you plan to remediate the recommendation for those resources.
+ 1. Enter a justification for the remediation by that date, and select **Save**.
-**To search for recommendations**:
-
-1. On the recommendation page, select an environment from the environment filter.
-
- :::image type="content" source="media/review-security-recommendations/environment-filter.png" alt-text="Screenshot of the environmental filter on the recommendation page.":::
-
- You can select 1, 2, or all options at a time. The page's results will automatically reflect your choice.
-
-1. Enter a name in the search box, or select one of the available filters.
-
- :::image type="content" source="media/review-security-recommendations/search-filters.png" alt-text="Screenshot of the search box and filter list.":::
-
-1. Select :::image type="icon" source="media/review-security-recommendations/add-filter.png" border="false"::: to add more filter(s).
-
-1. Select a filter from the drop-down menu.
-
- :::image type="content" source="media/review-security-recommendations/filter-drop-down.png" alt-text="Screenshot of the available filters to select.":::
-
-1. Select a value from the drop-down menu.
-
-1. Select **OK**.
+The due date for the recommendation does not change, but the security team can see that you plan to update the resources by the specified ETA date.
## Review recommendation data in Azure Resource Graph Explorer (ARG)
-You can review recommendations in ARG both on the recommendations page or on an individual recommendation.
+You can review recommendations in ARG from both the recommendations page and an individual recommendation.
The toolbar on the recommendation details page includes an **Open query** button to explore the details in [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml), an Azure service that gives you the ability to query - across multiple subscriptions - Defender for Cloud's security posture data.
For example, this recommendation details page shows 15 affected resources:
:::image type="content" source="./media/review-security-recommendations/open-query.png" alt-text="The **Open Query** button on the recommendation details page.":::
-When you open the underlying query, and run it, Azure Resource Graph Explorer returns the same 15 resources and their health status for this recommendation:
+When you open and run the underlying query, Azure Resource Graph Explorer returns the same 15 resources and their health status for this recommendation:
:::image type="content" source="./media/review-security-recommendations/run-query.png" alt-text="Azure Resource Graph Explorer showing the results for the recommendation shown in the previous screenshot.":::
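You can run the same kind of query outside the portal as well. The following is a sketch, assuming the Az.ResourceGraph PowerShell module; the aggregation shown is illustrative and isn't the exact query behind the **Open query** button:

```azurepowershell
# Minimal sketch - assumes the Az.ResourceGraph module is installed:
#   Install-Module -Name Az.ResourceGraph
# Summarize Defender for Cloud assessments by health status across subscriptions
Search-AzGraph -Query @"
securityresources
| where type == 'microsoft.security/assessments'
| extend status = tostring(properties.status.code)
| summarize count() by status
"@
```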
Recommendations that aren't included in the calculations of your secure score, s
Recommendations can be downloaded to a CSV report from the Recommendations page.
-**To download a CSV report of your recommendations**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
+To download a CSV report of your recommendations:
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.

1. Select **Download CSV report**.

    :::image type="content" source="media/review-security-recommendations/download-csv.png" alt-text="Screenshot showing you where to select the Download C S V report from.":::
defender-for-cloud Security Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-policy-concept.md
Title: Understanding security policies, initiatives, and recommendations in Microsoft Defender for Cloud description: Learn about security policies, initiatives, and recommendations in Microsoft Defender for Cloud. Previously updated : 11/09/2021 Last updated : 06/06/2022 # What are security policies, initiatives, and recommendations? Microsoft Defender for Cloud applies security initiatives to your subscriptions. These initiatives contain one or more security policies. Each of those policies results in a security recommendation for improving your security posture. This page explains each of these ideas in detail. - ## What is a security policy? An Azure Policy definition, created in Azure Policy, is a rule about specific security conditions that you want controlled. Built in definitions include things like controlling what type of resources can be deployed or enforcing the use of tags on all resources. You can also create your own custom policy definitions.
Defender for Cloud offers the following options for working with security initia
Using the policies, Defender for Cloud periodically analyzes the compliance status of your resources to identify potential security misconfigurations and weaknesses. It then provides you with recommendations on how to remediate those issues. Recommendations are the result of assessing your resources against the relevant policies and identifying resources that aren't meeting your defined requirements.
-Defender for Cloud makes its security recommendations based on your chosen initiatives. When a policy from your initiative is compared against your resources and finds one or more that aren't compliant it is presented as a recommendation in Defender for Cloud.
+Defender for Cloud makes its security recommendations based on your chosen initiatives. When a policy from your initiative is compared against your resources and finds one or more resources that aren't compliant, the result is presented as a recommendation in Defender for Cloud.
Recommendations are actions for you to take to secure and harden your resources. Each recommendation provides you with the following information:
In practice, it works like this:
For example, Azure Storage accounts must restrict network access to reduce their attack surface.
-1. The initiative includes multiple ***policies***, each with a requirement of a specific resource type. These policies enforce the requirements in the initiative.
+1. The initiative includes multiple ***policies***, each with a requirement of a specific resource type. These policies enforce the requirements in the initiative.
To continue the example, the storage requirement is enforced with the policy "Storage accounts should restrict network access using virtual network rules". 1. Microsoft Defender for Cloud continually assesses your connected subscriptions. If it finds a resource that doesn't satisfy a policy, it displays a ***recommendation*** to fix that situation and harden the security of resources that aren't meeting your security requirements.
- So, for example, if an Azure Storage account on any of your protected subscriptions isn't protected with virtual network rules, you'll see the recommendation to harden those resources.
+ So, for example, if an Azure Storage account on any of your protected subscriptions isn't protected with virtual network rules, you'll see the recommendation to harden those resources.
So, (1) an initiative includes (2) policies that generate (3) environment-specific recommendations.
+### Security recommendation details
+
+Security recommendations contain details that help you understand their significance and how to handle them.
++
+The recommendation details shown are:
+
+1. For supported recommendations, the top toolbar shows any or all of the following buttons:
+ - **Enforce** and **Deny** (see [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md)).
+ - **View policy definition** to go directly to the Azure Policy entry for the underlying policy.
+ - **Open query** - You can view the detailed information about the affected resources using Azure Resource Graph Explorer.
+1. **Severity indicator**
+1. **Freshness interval**
+1. **Count of exempted resources** - If exemptions exist for a recommendation, this shows the number of resources that have been exempted, with a link to view the specific resources.
+1. **Mapping to MITRE ATT&CK ® tactics and techniques** - If a recommendation has defined tactics and techniques, select the icon for links to the relevant pages on MITRE's site. This applies only to Azure scored recommendations.
+
+ :::image type="content" source="media/review-security-recommendations/tactics-window.png" alt-text="Screenshot of the MITRE tactics mapping for a recommendation.":::
+
+1. **Description** - A short description of the security issue.
+1. When relevant, the details page also includes a table of **related recommendations**:
+
+ The relationship types are:
+
+ - **Prerequisite** - A recommendation that must be completed before the selected recommendation
+ - **Alternative** - A different recommendation, which provides another way of achieving the goals of the selected recommendation
+ - **Dependent** - A recommendation for which the selected recommendation is a prerequisite
+
+ For each related recommendation, the number of unhealthy resources is shown in the "Affected resources" column.
+
+ > [!TIP]
+ > If a related recommendation is grayed out, its dependency isn't yet completed and so isn't available.
+
+1. **Remediation steps** - A description of the manual steps required to remediate the security issue on the affected resources. For recommendations with the **Fix** option, you can select **View remediation logic** before applying the suggested fix to your resources.
+
+1. **Affected resources** - Your resources are grouped into tabs:
+ - **Healthy resources** – Relevant resources, which either aren't impacted or on which you've already remediated the issue.
+ - **Unhealthy resources** – Resources that are still impacted by the identified issue.
+ - **Not applicable resources** – Resources for which the recommendation can't give a definitive answer. The not applicable tab also includes reasons for each resource.
+
+ :::image type="content" source="./media/review-security-recommendations/recommendations-not-applicable-reasons.png" alt-text="Screenshot of resources for which the recommendation can't give a definitive answer.":::
+
+1. Action buttons to remediate the recommendation or trigger a logic app.
+ ## Viewing the relationship between a recommendation and a policy As mentioned above, Defender for Cloud's built in recommendations are based on the Azure Security Benchmark. Almost every recommendation has an underlying policy that is derived from a requirement in the benchmark.
When you're reviewing the details of a recommendation, it's often helpful to be
:::image type="content" source="media/release-notes/view-policy-definition.png" alt-text="Link to Azure Policy page for the specific policy supporting a recommendation.":::
-Use this link to view the policy definition and review the evaluation logic.
+Use this link to view the policy definition and review the evaluation logic.
If you're reviewing the list of recommendations on our [Security recommendations reference guide](recommendations-reference.md), you'll also see links to the policy definition pages: :::image type="content" source="media/release-notes/view-policy-definition-from-documentation.png" alt-text="Accessing the Azure Policy page for a specific policy directly from the Microsoft Defender for Cloud recommendations reference page."::: - ## Next steps This page explained, at a high level, the basic concepts and relationships between policies, initiatives, and recommendations. For related information, see:
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
The following image shows a sample of the HPE ProLiant DL20 back panel:
| Quantity | PN| Description: high end |
|--|--|--|
|1| P06963-B21 | HPE DL20 Gen10 4SFF CTO Server |
-|1| P06963-B21 | HPE DL20 Gen10 4SFF CTO Server |
|1| P17104-L21 | HPE DL20 Gen10 E-2234 FIO Kit |
|2| 879507-B21 | HPE 16-GB 2Rx8 PC4-2666V-E STND Kit |
|3| 655710-B21 | HPE 1-TB SATA 7.2 K SFF SC DS HDD |
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
In contrast, when working with locally managed sensors:
- Sensor names can be updated in the sensor console.
+### Devices monitored by Defender for IoT
++ ## Analytics engines Defender for IoT sensors apply analytics engines on ingested data, triggering alerts based on both real-time and pre-recorded traffic.
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
You can view device information from connected sensors by using the *device inventory* in the on-premises management console. This feature gives you a comprehensive view of all network information. Use import, export, and filtering tools to manage this information. The status information about the connected sensor versions also appears.
+For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot).
+
+## View the device inventory from an on-premises management console
+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/device-inventory-data-table.png" alt-text="Screenshot of the device inventory data table."::: The following table describes the table columns in the device inventory.
The following table describes the table columns in the device inventory.
| **Discovered** | When this device was first seen in the network. |
| **PLC mode (preview)** | The PLC operating mode includes the Key state (physical) and run state (logical). Possible **Key** states include Run, Program, Remote, Stop, Invalid, and Programming Disabled. Possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, and Offline. If both states are the same, only one state is presented. |
-## What is an Inventory device?
-
-The Defender for IoT Device Inventory displays an extensive range of device attributes that are detected by sensors monitoring organizational networks and managed endpoints. Defender for IoT will identify and classify devices as a single unique network device in the inventory for:
-
-1. Standalone IT/OT/IoT devices (w/ 1 or multiple NICs)
-1. Devices composed of multiple backplane components (including all racks/slots/modules)
-1. Devices acting as network infrastructure such as Switch/Router (w/ multiple NICs).
-
-Public internet IP addresses, multicast groups, and broadcast groups aren't considered inventory devices.
-Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.
-
-## Integrate data into the device inventory
+## Integrate data into the enterprise device inventory
Data integration capabilities let you enhance the data in the device inventory with information from other resources. These sources include CMDBs, DNS, firewalls, and Web APIs.
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
Options are available to:
- Import Windows registry details.
- Create groups for display in the device map.
-
-## What is an inventory device?
-
-The Defender for IoT Device inventory displays an extensive range of asset attributes that are detected by sensors monitoring the organization's networks and managed endpoints.
-
-Defender for IoT will identify and classify devices as a single unique network device in the inventory for:
-
-- Standalone IT/OT/IoT devices (w/ 1 or multiple NICs)
-- Devices composed of multiple backplane components (including all racks/slots/modules)
-- Devices acting as network infrastructure such as Switch/Router (w/ multiple NICs).
-Public internet IP addresses, multicast groups, and broadcast groups aren't considered inventory devices.
-Devices that have been inactive for more than 60 days are classified as inactive inventory devices.
+For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot).
## View device attributes in the inventory
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Before you subscribe, you should have a sense of how many devices you would like
Users can also work with a trial subscription, which supports monitoring a limited number of devices for 30 days. See [Microsoft Defender for IoT pricing](https://azure.microsoft.com/pricing/details/iot-defender/) for information on committed device prices.
+### What's a device?
++ ## Requirements Before you onboard a subscription, verify that:
If you already have access to an Azure subscription, but it isn't listed when su
Azure **Subscription Owners** and **Subscription Contributors** can onboard, update, and offboard Microsoft Defender for IoT subscriptions.
+### Calculate the number of devices you need to monitor
+
+When onboarding or editing your Defender for IoT plan, you'll need to know how many devices you want to monitor.
+
+**To calculate the number of devices you need to monitor**:
+
+Collect the total number of devices in your network and remove:
+
+- **Duplicate devices that have the same IP or MAC address**. When detected, the duplicates are automatically removed by Defender for IoT.
+
+- **Duplicate devices that have the same ID**. These are the same devices, seen by the same sensor, with different field values. For such devices, check the last time each device had activity and use the latest device only.
+
+- **Inactive devices**, with no traffic for more than 60 days.
+
+- **Broadcast / multicast devices**. These represent unique addresses but not unique devices.
+
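The following sketch makes the arithmetic concrete. All of the counts are hypothetical placeholders; substitute the numbers from your own inventory:

```azurepowershell
# Hypothetical counts - replace with the numbers from your own inventory
$totalDevices = 1200  # all devices detected across your network
$duplicates   = 50    # same IP/MAC address, or same ID counted more than once
$inactive     = 100   # no traffic for more than 60 days
$broadcast    = 25    # broadcast/multicast addresses, not unique devices

# Devices to commit to in your Defender for IoT plan
$committedDevices = $totalDevices - $duplicates - $inactive - $broadcast
$committedDevices
```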
+For more information, see [What's a device?](#whats-a-device)
+ ## Onboard a trial subscription If you would like to evaluate Defender for IoT, you can use a trial subscription. The trial is valid for 30 days and supports 1000 committed devices. Using the trial lets you deploy one or more Defender for IoT sensors on your network. Use the sensors to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. The trial also allows you to download an on-premises management console to view aggregated information generated by sensors.
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
# About high availability
-Increase the resiliency of your Defender for IoT deployment by installing an on-premises management console high availability appliance. High availability deployments ensure your managed sensors continuously report to an active on-premises management console.
+Increase the resiliency of your Defender for IoT deployment by configuring high availability on your on-premises management console. High availability deployments ensure your managed sensors continuously report to an active on-premises management console.
This deployment is implemented with an on-premises management console pair that includes a primary and secondary appliance.
+> [!NOTE]
+> In this document, the principal on-premises management console is referred to as the primary, and the agent is referred to as the secondary.
+ ## About primary and secondary communication When a primary and secondary on-premises management console is paired:
When a primary and secondary on-premises management console is paired:
When validation is `ON`, the appliance should be able to establish connection to the CRL server defined by the certificate. -- The primary on-premises management console data is automatically backed up to the secondary on-premises management console every 10 minutes. The on-premises management console configurations and device data are backed up. PCAP files and logs are not included in the backup. You can back up and restore of PCAPs and logs manually.
+- The primary on-premises management console data is automatically backed up to the secondary on-premises management console every 10 minutes. The on-premises management console configurations and device data are backed up. PCAP files and logs are not included in the backup. You can back up and restore PCAPs and logs manually.
-- The primary setup at the management console is duplicated on the secondary; for example, system settings. If these settings are updated on the primary, they're also updated on the secondary.
+- The primary setup on the management console is duplicated on the secondary. For example, if the system settings are updated on the primary, they're also updated on the secondary.
- Before the license of the secondary expires, you should define it as the primary in order to update the license.
When a primary and secondary on-premises management console is paired:
If a sensor can't connect to the primary on-premises management console, it automatically connects to the secondary. Your system will be supported by both the primary and secondary simultaneously if fewer than half of the sensors are communicating with the secondary. The secondary takes over when more than half of the sensors are communicating with it. Failover from the primary to the secondary takes approximately three minutes. When the failover occurs, the primary on-premises management console freezes. When this happens, you can sign in to the secondary using the same sign-in credentials.
-During failover, sensors continue attempting to communicate with the primary appliance. When more than half the managed sensors succeed to communicate with the primary, the primary is restored. The following message appears at the secondary console when the primary is restored.
+During failover, sensors continue attempting to communicate with the primary appliance. When more than half of the managed sensors succeed in communicating with the primary, the primary is restored. The following message appears on the secondary console when the primary is restored:
:::image type="content" source="media/how-to-set-up-high-availability/secondary-console-message.png" alt-text="Screenshot of a message that appears at the secondary console when the primary is restored.":::
The installation and configuration procedures are performed in four main stages:
1. Install an on-premises management console primary appliance.
-1. Configure the on-premises management console primary appliance. For example, scheduled backup settings, VLAN settings. See the on-premises management console user guide for details. All settings are applied to the secondary appliance automatically after pairing.
+1. Configure the on-premises management console primary appliance. For example, scheduled backup settings, VLAN settings. For more information, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). All settings are applied to the secondary appliance automatically after pairing.
1. Install an on-premises management console secondary appliance. For more information, see [About the Defender for IoT Installation](how-to-install-software.md).
The installation and configuration procedures are performed in four main stages:
Verify that you've met the following high availability requirements: -- Certificate requirements
+- [Certificate requirements](how-to-manage-the-on-premises-management-console.md#manage-certificates)
- Software and hardware requirements
Verify that you've met the following high availability requirements:
### Network access requirements
-Verify if your organizational security policy allows you to hav access to the following services on the primary and secondary on-premises management console. These services also allow the connection between the sensors and secondary on-premises management console:
+Verify that your organizational security policy allows access to the following services on the primary and secondary on-premises management console. These services also allow the connection between the sensors and the secondary on-premises management console:
|Port|Service|Description|
|-|-|--|
Verify that both the primary and secondary on-premises management console applia
sudo cyberx-management-trusted-hosts-add -ip <Secondary IP> -token <connection string> ```
- >[!NOTE]
- > In this document, the principal on-premises management console is referred to as the primary, and the agent is referred to as the secondary.
1. Enter the IP address of the secondary appliance in the ```<Secondary ip>``` field and select Enter. The IP address is then validated, and the SSL certificate is downloaded to the primary. Entering the IP address also associates the sensors to the secondary appliance.
The core application logs can be exported to the Defender for IoT support team t
## Update the on-premises management console with high availability
-Perform the high availability update in the following order. Make sure each step is complete before you begin a new step.
+To update an on-premises management console that has high availability configured, you will need to:
+
+1. Disconnect the high availability from both the primary and secondary appliances.
+1. Update the appliances to the new version.
+1. Reconfigure the high availability back onto both appliances.
+
+Perform the update in the following order. Make sure each step is complete before you begin a new step.
+
+**To update an on-premises management console with high availability configured**:
+
+1. Disconnect the high availability from both the primary and secondary appliances:
+
+ **On the primary:**
+
+ 1. Get the list of the currently connected appliances. Run:
+
+ ```bash
+ cyberx-management-trusted-hosts-list
+ ```
+
+ 1. Find the domain associated with the secondary appliance and copy it to your clipboard. For example:
+
+ :::image type="content" source="media/how-to-set-up-high-availability/update-high-availability-domain.jpg" alt-text="Screenshot showing the domain associated with the secondary appliance.":::
+
+ 1. Remove the secondary domain from the list of trusted hosts. Run:
+
+ ```bash
+ sudo cyberx-management-trusted-hosts-remove -d [Secondary domain]
+ ```
+
+ 1. Verify that the certificate is installed correctly. Run:
+
+ ```bash
+ sudo cyberx-management-trusted-hosts-apply
+ ```
+
+ **On the secondary:**
+
+ 1. Get the list of the currently connected appliances. Run:
+
+ ```bash
+ cyberx-management-trusted-hosts-list
+ ```
+
+ 1. Find the domain associated with the primary appliance and copy it to your clipboard.
-**To update with high availability**:
   1. Remove the primary domain from the list of trusted hosts. Run:
+
+ ```bash
+ sudo cyberx-management-trusted-hosts-remove -d [Primary domain]
+ ```
+
+ 1. Verify that the certificate is installed correctly. Run:
+
+ ```bash
+ sudo cyberx-management-trusted-hosts-apply
+ ```
-1. Update the primary on-premises management console.
+1. Update both the primary and secondary appliances to the new version. For more information, see [Update the software version](how-to-manage-the-on-premises-management-console.md#update-the-software-version).
-1. Update the secondary on-premises management console.
+1. Set up high availability again, on both the primary and secondary appliances. For more information, see [Create the primary and secondary pair](#create-the-primary-and-secondary-pair).
-1. Update the sensors.
## Next steps
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-defender-for-iot-glossary.md
This glossary provides a brief description of important terms and concepts for t
|--|--|--| | **Data mining** | Generate comprehensive and granular reports about your network devices:<br /><br />- **SOC incident response**: Reports in real time to help deal with immediate incident response. For example, a report can list devices that might need patching.<br /><br />- **Forensics**: Reports based on historical data for investigative reports.<br /><br />- **IT network integrity**: Reports that help improve overall network security. For example, a report can list devices with weak authentication credentials.<br /><br />- **visibility**: Reports that cover all query items to view all baseline parameters of your network.<br /><br />Save data-mining reports for read-only users to view. | **[Baseline](#b)<br /><br />[Reports](#r)** | | **Defender for IoT platform** | The Defender for IoT solution installed on Defender for IoT sensors and the on-premises management console. | **[Sensor](#s)<br /><br />[On-premises management console](#o)** |
-| **Inventory device** | Defender for IoT will identify and classify devices as a single unique network device in the inventory for:<br><br>- Standalone IT/OT/IoT devices (w/ 1 or multiple NICs)<br>- Devices composed of multiple backplane components (including all racks/slots/modules)<br>- Devices acting as network infrastructure such as Switch/Router (w/ multiple NICs). <br><br>Public internet IP addresses, multicast groups, and broadcast groups are not considered inventory devices. Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
+| **Device inventories** | Defender for IoT considers any of the following as single and unique network devices:<br><br>- Managed or un-managed standalone IT/OT/IoT devices, with one or more NICs<br>- Devices with multiple backplane components, including all racks, slots, or modules<br>- Devices that provide network infrastructure, such as switches or routers with multiple NICs<br><br>Monitored devices are listed in the **Device inventory** pages on the Azure portal, sensor console, and the on-premises management console. Data integration features let you enhance device data with details from other enterprise resources, such as CMDBs, DNS, firewalls, and Web APIs.<br><br>The following items are not monitored as devices, and do not appear in the Defender for IoT device inventories:<br>- Public internet IP addresses<br>- Multi-cast groups<br>- Broadcast groups<br><br>Devices that are inactive for more than 60 days are classified as *inactive* inventory devices.| [**Device map**](#d)|
| **Device map** | A graphical representation of network devices that Defender for IoT detects. It shows the connections between devices and information about each device. Use the map to:<br /><br />- Retrieve and control critical device information.<br /><br />- Analyze network slices.<br /><br />- Export device details and summaries. | **[Purdue layer group](#p)** |
-| **Device inventory - sensor** | The device inventory displays an extensive range of device attributes detected by Defender for IoT. Options are available to:<br /><br />- Filter displayed information.<br /><br />- Export this information to a CSV file.<br /><br />- Import Windows registry details. | **[Group](#g)** <br /><br />**[Device inventory- on-premises management console](#d)** |
-| **Device inventory - on-premises management console** | Device information from connected sensors can be viewed from the on-premises management console in the device inventory. This gives users of the on-premises management console a comprehensive view of all network information. | **[Device inventory - sensor](#d)<br /><br />[Device inventory - data integrator](#d)** |
-| **Device inventory - data integrator** | The data integration capabilities of the on-premises management console let you enhance the data in the device inventory with information from other enterprise resources. Example resources are CMDBs, DNS, firewalls, and Web APIs. | **[Device inventory - on-premises management console](#d)** |
## E
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
Next, create the Kusto cluster. The command below requires 5-10 minutes to execu
az kusto cluster create --cluster-name $clustername --sku name="Dev(No SLA)_Standard_E2a_v4" tier="Basic" --resource-group $resourcegroup --location $location --type SystemAssigned ```
-Create a database in your new Kusto cluster (using the cluster name from above and in the same location). This database will be used to store contextualized Azure Digital Twins data. The command below creates a database with a soft delete period of 365 days, and a hot cache period of 31 days. For more information about the options available for this command, see [az kusto database create](/cli/azure/kusto/database?view=azure-cli-latest&preserve-view=true#az_kusto_database_create).
+Create a database in your new Kusto cluster (using the cluster name from above and in the same location). This database will be used to store contextualized Azure Digital Twins data. The command below creates a database with a soft delete period of 365 days, and a hot cache period of 31 days. For more information about the options available for this command, see [az kusto database create](/cli/azure/kusto/database?view=azure-cli-latest&preserve-view=true#az-kusto-database-create).
```azurecli-interactive az kusto database create --cluster-name $clustername --database-name $databasename --resource-group $resourcegroup --read-write-database soft-delete-period=P365D hot-cache-period=P31D location=$location
dns Dns Web Sites Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-web-sites-custom-domain.md
Title: Tutorial - Create custom Azure DNS records for a web app
-description: In this tutorial you create custom domain DNS records for web app using Azure DNS.
+ Title: 'Tutorial: Create custom Azure DNS records for a web app'
+description: In this tutorial, you create custom domain DNS records for web app using Azure DNS.
Previously updated : 10/20/2020 Last updated : 06/06/2022 #Customer intent: As an experienced network administrator, I want to create DNS records in Azure DNS, so I can host a web app in a custom domain.
You can configure Azure DNS to host a custom domain for your web apps. For example, you can create an Azure web app and have your users access it using either www\.contoso.com or contoso.com as a fully qualified domain name (FQDN).
-> [!NOTE]
-> Contoso.com is used as an example throughout this tutorial. Substitute your own domain name for contoso.com.
- To do this, you have to create three records: * A root "A" record pointing to contoso.com * A root "TXT" record for verification * A "CNAME" record for the www name that points to the A record
-Keep in mind that if you create an A record for a web app in Azure, the A record must be manually updated if the underlying IP address for the web app changes.
+> [!NOTE]
+> Contoso.com is used as an example throughout this tutorial. Substitute your own domain name for contoso.com.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* An Azure account with an active subscription. If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* A domain name that you can host in Azure DNS. You must have full control of this domain. Full control includes the ability to set the name server (NS) records for the domain.
-* You must have a domain name available to test with that you can host in Azure DNS . You must have full control of this domain. Full control includes the ability to set the name server (NS) records for the domain.
-* [Create an App Service app](../app-service/quickstart-html.md), or use an app that you created for another tutorial.
+* A web app. If you don't have one, you can [create a static HTML web app](../app-service/quickstart-html.md) for this tutorial.
-* Create a DNS zone in Azure DNS, and delegate the zone in your registrar to Azure DNS.
+* An Azure DNS zone with delegation in your registrar to Azure DNS. If you don't have one, you can [create a DNS zone](./dns-getstarted-powershell.md), then [delegate your domain](dns-delegate-domain-azure-dns.md#delegate-the-domain) to Azure DNS.
- 1. To create a DNS zone, follow the steps in [Create a DNS zone](./dns-getstarted-powershell.md).
- 2. To delegate your zone to Azure DNS, follow the steps in [DNS domain delegation](dns-delegate-domain-azure-dns.md).
-After creating a zone and delegating it to Azure DNS, you can then create records for your custom domain.
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-## Create an A record and TXT record
+## Create the A record
An A record is used to map a name to its IP address. In the following example, assign "\@" as an A record using your web app IPv4 address. \@ typically represents the root domain. ### Get the IPv4 address
-In the left navigation of the App Services page in the Azure portal, select **Custom domains**.
-
-![Custom domain menu](../app-service/./media/app-service-web-tutorial-custom-domain/custom-domain-menu.png)
+In the left navigation of the App Services page in the Azure portal, select **Custom domains**, then copy the IP address of your web app:
-In the **Custom domains** page, copy the app's IPv4 address:
-![Portal navigation to Azure app](../app-service/./media/app-service-web-tutorial-custom-domain/mapping-information.png)
+### Create the record
-### Create the A record
+To create the A record, use:
```azurepowershell New-AzDnsRecordSet -Name "@" -RecordType "A" -ZoneName "contoso.com" `
New-AzDnsRecordSet -Name "@" -RecordType "A" -ZoneName "contoso.com" `
-DnsRecords (New-AzDnsRecordConfig -IPv4Address "<your web app IP address>") ```
-### Create the TXT record
+> [!IMPORTANT]
+> The A record must be manually updated if the underlying IP address for the web app changes.
+
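To confirm the record was created as expected, you can read it back with `Get-AzDnsRecordSet`. This is a sketch that assumes the zone name and resource group used elsewhere in this tutorial:

```azurepowershell
# Read back the A record to confirm the IP address it points to
Get-AzDnsRecordSet -Name "@" -RecordType "A" -ZoneName "contoso.com" `
  -ResourceGroupName "MyAzureResourceGroup"
```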
+## Create the TXT record
App Services uses this record only at configuration time to verify that you own the custom domain. You can delete this TXT record after your custom domain is validated and configured in App Service. > [!NOTE] > If you want to verify the domain name, but not route production traffic to the web app, you only need to specify the TXT record for the verification step. Verification does not require an A or CNAME record in addition to the TXT record.
+To create the TXT record, use:
```azurepowershell New-AzDnsRecordSet -ZoneName contoso.com -ResourceGroupName MyAzureResourceGroup ` -Name "@" -RecordType "txt" -Ttl 600 ` -DnsRecords (New-AzDnsRecordConfig -Value "contoso.azurewebsites.net") ```
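Once the record propagates, you can optionally confirm it resolves publicly. This check isn't a required tutorial step; `Resolve-DnsName` requires the Windows DnsClient module:

```azurepowershell
# Optional check; requires the Windows DnsClient module.
Resolve-DnsName -Name "contoso.com" -Type TXT
```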
## Create the CNAME record
-If your domain is already managed by Azure DNS (see [DNS domain delegation](dns-domain-delegation.md), you can use the following example to create a CNAME record for contoso.azurewebsites.net.
-
-Open Azure PowerShell and create a new CNAME record. This example creates a record set type CNAME with a "time to live" of 600 seconds in DNS zone named "contoso.com" with the alias for the web app contoso.azurewebsites.net.
-
-### Create the record
+If your domain is already managed by Azure DNS (see [DNS domain delegation](dns-domain-delegation.md)), you can use the following example to create a CNAME record for contoso.azurewebsites.net. The CNAME record created in this example has a "time to live" (TTL) of 600 seconds, lives in the DNS zone named "contoso.com", and points the alias www to the web app contoso.azurewebsites.net.
```azurepowershell New-AzDnsRecordSet -ZoneName contoso.com -ResourceGroupName "MyAzureResourceGroup" ` -Name "www" -RecordType "CNAME" -Ttl 600 ` -DnsRecords (New-AzDnsRecordConfig -Cname "contoso.azurewebsites.net") ```
The following example is the response:
``` Name : www ZoneName : contoso.com
- ResourceGroupName : myresourcegroup
+ ResourceGroupName : myazureresourcegroup
Ttl : 600 Etag : 8baceeb9-4c2c-4608-a22c-229923ee185 RecordType : CNAME
``` ## Add custom host names
-Now you can add the custom host names to your web app:
+Now, you can add the custom host names to your web app:
```azurepowershell Set-AzWebApp ` -Name contoso `
- -ResourceGroupName MyAzureResourceGroup `
+ -ResourceGroupName <your web app resource group> `
-HostNames @("contoso.com","www.contoso.com","contoso.azurewebsites.net") ``` ## Test the custom host names
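Before browsing, you can optionally confirm the host names were applied. This sketch assumes the same app name and resource group used above:

```azurepowershell
(Get-AzWebApp -Name contoso -ResourceGroupName "<your web app resource group>").HostNames
```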
-Open a browser and browse to `http://www.<your domainname>` and `http://<you domain name>`.
+Open a browser and browse to `http://www.<your domain name>` and `http://<your domain name>`.
> [!NOTE] > Make sure you include the `http://` prefix, otherwise your browser may attempt to predict a URL for you! You should see the same page for both URLs. For example:
-![Contoso app service](media/dns-web-sites-custom-domain/contoso-app-svc.png)
- ## Clean up resources
-When you no longer need the resources created in this tutorial, you can delete the **myresourcegroup** resource group.
+When no longer needed, you can delete all resources created in this tutorial by deleting the resource group **MyAzureResourceGroup**:
+
+1. From the left-hand menu, select **Resource groups**.
+
+2. Select the **MyAzureResourceGroup** resource group.
+
+3. Select **Delete resource group**.
+
+4. Enter *MyAzureResourceGroup* and select **Delete**.
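Because this tutorial uses Azure PowerShell throughout, you can alternatively delete the resource group with a single command:

```azurepowershell
Remove-AzResourceGroup -Name "MyAzureResourceGroup"
```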
## Next steps
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
A verified partner is a partner organization whose identity has been validated b
Customers authorize you to create partner topics or partner destinations on their Azure subscription. The authorization is granted for a given resource group in a customer's Azure subscription, and it's time bound. You must create the channel before the expiration date set by the customer. Your documentation should suggest to customers an adequate window of time for configuring your system to send or receive events and for creating the channel before the authorization expires. If you attempt to create a channel without authorization, or after the authorization has expired, the channel creation will fail and no resource will be created on the customer's Azure subscription. > [!NOTE]
-> Event Grid will start **requiring authorizations to create partner topics or partner destinations** around June 30th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
+> Event Grid will start **requiring authorizations to create partner topics or partner destinations** around June 15th, 2022. You should update your documentation asking your customers to grant you the authorization before you attempt to create a channel or an event channel.
>[!IMPORTANT] > **A verified partner is not an authorized partner**. Even if a partner has been vetted by Microsoft, you still need to be authorized before you can create a partner topic or partner destination on the customer's Azure subscription.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
You must grant your consent to the partner to create partner topics in a resourc
> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. > [!NOTE]
-> Event Grid will start requiring authorizations to create partner topics or partner destinations around June 30th, 2022. Meanwhile, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt-in to use this feature and try to use it in non-production Azure subscriptions before it becomes a mandatory step around June 30th, 2022. To opt-in to this feature, reach out to [mailto:askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
+> Event Grid will start requiring authorizations to create partner topics or partner destinations around June 15th, 2022. Meanwhile, requiring your (subscriber's) authorization for a partner to create resources on your Azure subscription is an **optional** feature. We encourage you to opt in to this feature and try it in non-production Azure subscriptions before it becomes a mandatory step around June 15th, 2022. To opt in to this feature, reach out to [askgrid@microsoft.com](mailto:askgrid@microsoft.com) using the subject line **Request to enforce partner authorization on my Azure subscription(s)** and provide your Azure subscription(s) in the email.
The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one of them is required. For your convenience, the following examples include a sample expiration time in the UTC format.
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-erdirect.md
Before using ExpressRoute Direct, you must first enroll your subscription. To en
Select-AzSubscription -Subscription "<SubscriptionID or SubscriptionName>" ```
-2. Register your subscription for Public Preview using the following command:
+2. Register your subscription using the following command:
```azurepowershell-interactive Register-AzProviderFeature -FeatureName AllowExpressRoutePorts -ProviderNamespace Microsoft.Network ```
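Feature registration isn't instantaneous. To check whether it's complete, you can optionally run:

```azurepowershell-interactive
Get-AzProviderFeature -FeatureName AllowExpressRoutePorts -ProviderNamespace Microsoft.Network
```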
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md
Previously updated : 07/30/2021 Last updated : 06/06/2022
Azure Firewall can be configured during deployment to span multiple Availability
You can also associate Azure Firewall to a specific zone just for proximity reasons, using the service standard 99.95% SLA.
-There's no additional cost for a firewall deployed in an Availability Zone. However, there are added costs for inbound and outbound data transfers associated with Availability Zones. For more information, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/).
+There's no additional cost for a firewall deployed in more than one Availability Zone. However, there are added costs for inbound and outbound data transfers associated with Availability Zones. For more information, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/).
Azure Firewall Availability Zones are available in regions that support Availability Zones. For more information, see [Regions that support Availability Zones in Azure](../availability-zones/az-region.md)
firewall Tutorial Firewall Dnat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-dnat.md
Previously updated : 04/29/2021 Last updated : 06/06/2022 #Customer intent: As an administrator, I want to deploy and configure Azure Firewall DNAT so that I can control inbound Internet access to resources located in a subnet.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a resource group 1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. On the Azure portal home page, select **Resource groups**, then select **Add**.
+2. On the Azure portal home page, select **Resource groups**, then select **Create**.
4. For **Subscription**, select your subscription.
-1. For **Resource group name**, type **RG-DNAT-Test**.
+1. For **Resource group**, type **RG-DNAT-Test**.
5. For **Region**, select a region. All other resources that you create must be in the same region. 6. Select **Review + create**. 1. Select **Create**.
First, create the VNets and then peer them.
1. From the Azure portal home page, select **All services**. 2. Under **Networking**, select **Virtual networks**.
-3. Select **Add**.
+3. Select **Create**.
7. For **Resource group**, select **RG-DNAT-Test**. 1. For **Name**, type **VN-Hub**. 1. For **Region**, select the same region that you used before. 1. Select **Next: IP Addresses**. 1. For **IPv4 Address space**, accept the default **10.0.0.0/16**.
-1. Under **Subnet name**, select default.
+1. Under **Subnet name**, select **default**.
1. Edit the **Subnet name** and type **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
First, create the VNets and then peer them.
1. From the Azure portal home page, select **All services**. 2. Under **Networking**, select **Virtual networks**.
-3. Select **Add**.
+3. Select **Create**.
1. For **Resource group**, select **RG-DNAT-Test**. 1. For **Name**, type **VN-Spoke**. 1. For **Region**, select the same region that you used before.
Now peer the two VNets.
Create a workload virtual machine, and place it in the **SN-Workload** subnet. 1. From the Azure portal menu, select **Create a resource**.
-2. Under **Popular**, select **Windows Server 2016 Datacenter**.
+2. Under **Popular**, select **Windows Server 2019 Datacenter**.
**Basics**
After deployment finishes, note the private IP address for the virtual machine.
|Resource group |Select **RG-DNAT-Test** | |Name |**FW-DNAT-test**| |Region |Select the same location that you used previously|
+ |Firewall tier|**Standard**|
|Firewall management|**Use Firewall rules (classic) to manage this firewall**| |Choose a virtual network |**Use existing**: VN-Hub| |Public IP address |**Add new**, Name: **fw-pip**.|
For the **SN-Workload** subnet, you configure the outbound default route to go t
1. From the Azure portal home page, select **All services**. 2. Under **Networking**, select **Route tables**.
-3. Select **Add**.
+3. Select **Create**.
5. For **Subscription**, select your subscription. 1. For **Resource group**, select **RG-DNAT-Test**. 1. For **Region**, select the same region that you used previously.
For the **SN-Workload** subnet, you configure the outbound default route to go t
1. Select **OK**. 1. Select **Routes**, and then select **Add**. 1. For **Route name**, type **FW-DG**.
-1. For **Address prefix**, type **0.0.0.0/0**.
+1. For **Address prefix destination**, select **IP Addresses**.
+1. For **Destination IP addresses/CIDR ranges**, type **0.0.0.0/0**.
1. For **Next hop type**, select **Virtual appliance**. Azure Firewall is actually a managed service, but virtual appliance works in this situation. 18. For **Next hop address**, type the private IP address for the firewall that you noted previously.
-19. Select **OK**.
+19. Select **Add**.
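If you prefer scripting to the portal, here's a rough Azure PowerShell equivalent of this route. The route table name is a placeholder; the other values mirror the steps above:

```azurepowershell
# Sketch: the same default route via PowerShell. "<your route table>" is a placeholder.
$rt = Get-AzRouteTable -Name "<your route table>" -ResourceGroupName "RG-DNAT-Test"
Add-AzRouteConfig -Name "FW-DG" -AddressPrefix "0.0.0.0/0" `
  -NextHopType "VirtualAppliance" -NextHopIpAddress "<firewall private IP>" `
  -RouteTable $rt
Set-AzRouteTable -RouteTable $rt
```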
## Configure a NAT rule
healthcare-apis Events Consume Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md
+
+ Title: Consume events with Logic Apps - Azure Health Data Services
+description: This article provides resources on how to consume events with Logic Apps.
+++++ Last updated : 05/26/2022+++
+# Consume events with Logic Apps
+
+This tutorial shows how to use Azure Logic Apps to process Azure Health Data Services FHIR events. With Logic Apps, you create and run automated workflows that process event data from other applications. You'll learn how to register a FHIR event with your Logic App, meet specified event criteria, and perform a service operation.
+
+Here's an example of a Logic App workflow:
++
+The workflow is on the left and the trigger condition is on the right.
+
+## Overview
+
+Follow these steps to create a Logic App workflow to consume FHIR events:
+
+1. Set up prerequisites
+2. Create a Logic App
+3. Create a Logic App workflow
+
+## Prerequisites
+
+Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy Events in the Azure portal](./events-deploy-portal.md).
+
+## Creating a Logic App
+
+To set up an automated workflow, you must first create a Logic App. For more information about Logic Apps, see [What is Azure Logic Apps?](./../../logic-apps/logic-apps-overview.md)
+
+### Specify your Logic App details
+
+Follow these steps:
+
+1. Go to the Azure portal.
+2. Search for "Logic App".
+3. Click "Add".
+4. Specify Basic details.
+5. Specify Hosting.
+6. Specify Monitoring.
+7. Specify Tags.
+8. Review and create your Logic App.
+
+You now need to fill out the details of your Logic App. Specify information for these five categories. They are in separate tabs:
++
+- Tab 1 - Basics
+- Tab 2 - Hosting
+- Tab 3 - Monitoring
+- Tab 4 - Tags
+- Tab 5 - Review + Create
+
+### Basics - Tab 1
+
+Start by specifying the following basics:
+
+#### Project details
+
+- Subscription
+- Resource Group
+
+Select a current subscription and specify an existing or new resource group.
+
+#### Instance details
+
+- Logic App name
+- Publish type
+- Region
+
+Create a name for your Logic App. You must choose Workflow or Docker Container as your publishing type. Select a region that is compatible with your plan.
+
+#### Plan
+
+- Plan type
+- App Service Plan
+- SKU and size
+
+Choose a plan type (Standard or Consumption). Create a new Windows Plan name and specify the SKU and size.
+
+#### Zone redundancy
+
+- Zone redundancy deployment
+
+Enabling your plan will make it zone redundant.
+
+### Hosting - Tab 2
+
+Continue specifying your Logic App by clicking "Next: Hosting".
+
+#### Storage
+
+- Storage type
+- Storage account
+
+Choose the type of storage you want to use and the storage account. You can use Azure Storage or add SQL functionality. You must also create a new storage account or use an existing one.
+
+### Monitoring - Tab 3
+
+Continue specifying your Logic App by clicking "Next: Monitoring".
+
+#### Monitoring with Application Insights
+
+- Enable Application Insights
+- Application Insights
+- Region
+
+Enable Azure Monitor Application Insights to automatically monitor your application. If you enable it, you must create a new Application Insights resource and specify its region.
+
+### Tags - Tab 4
+
+Continue specifying your Logic App by clicking "Next: Tags".
+
+#### Use tags to categorize resources
+
+Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.
+
+This example will not use tagging.
+
+### Review + create - Tab 5
+
+Finish specifying your Logic App by clicking "Next: Review + create".
+
+#### Review your Logic App
+
+Your proposed Logic App will display the following details:
+
+- Subscription
+- Resource Group
+- Logic App Name
+- Runtime stack
+- Hosting
+- Storage
+- Plan
+- Monitoring
+
+If you're satisfied with the proposed configuration, click "Create". If not, click "Previous" to go back and specify new details.
+
+First you'll see an alert telling you that deployment is initializing. Next you'll see a new page telling you that the deployment is in progress.
++
+If there are no errors, you will finally see a notification telling you that your deployment is complete.
++
+#### Your Logic App dashboard
+
+Azure creates a dashboard when your Logic App is complete. The dashboard will show you the status of your app. You can return to your dashboard by clicking Overview in the Logic App menu. Here's a Logic App dashboard:
++
+You can do the following activities from your dashboard.
+
+- Browse
+- Refresh
+- Stop
+- Restart
+- Swap
+- Get Publish Profile
+- Reset Publish Profile
+- Delete
+
+## Creating a Logic App workflow
+
+When your Logic App is running, follow these steps to create a Logic App workflow:
+
+1. Initialize a workflow
+2. Configure a workflow
+3. Design a workflow
+4. Add an action
+5. Give FHIR Reader access
+6. Add a condition
+7. Choose condition criteria
+8. Test your condition
+
+### Initializing your workflow
+
+Before you begin, you'll need to have a Logic App configured and running correctly.
+
+Once your Logic App is running, you can create and configure a workflow. To initialize a workflow, follow these steps:
+
+1. Start at the Azure portal.
+2. Click "Logic Apps" in Azure services.
+3. Select the Logic App you created.
+4. Click "Workflows" in the Workflow menu on the left.
+5. Click "Add" to add a workflow.
+
+### Configuring a new workflow
+
+You will see a new panel on the right for creating a workflow.
++
+You can specify the details of the new workflow in the panel on the right.
+
+#### Creating a new workflow for the Logic App
+
+To set up a new workflow, fill in these details:
+
+- Workflow Name
+- State type
+
+Specify a new name for your workflow. Indicate whether you want the workflow to be stateful or stateless. Stateful is for business processes and stateless is for processing IoT events.
+
+When you have specified the details, click "Create" to begin designing your workflow.
+
+### Designing the workflow
+
+In your new workflow, click the name of the enabled workflow.
+
+You can write code to design a workflow for your application, but for this tutorial, choose the Designer option on the Developer menu.
+
+Next, click "Choose an operation" to display the "Add a Trigger" blade on the right. Then search for "Azure Event Grid" and click the "Azure" tab below. The Event Grid is not a Logic App Built-in.
++
+When you see the "Azure Event Grid" icon, click on it to display the Triggers and Actions available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md).
+
+Click "When a resource event occurs" to set up a trigger for the Azure Event Grid.
+
+To tell Event Grid how to respond to the trigger, you must specify parameters and add actions.
+
+#### Parameter settings
+
+You need to specify the parameters for the trigger:
+
+- Subscription
+- Resource Type
+- Resource Name
+- Event type item(s)
+
+Fill in the details for subscription, resource type, and resource name. Then you must specify the event types you want to respond to. The event types used in this article are:
+
+- Resource created
+- Resource deleted
+- Resource updated
+
+For more information about event types, see [What FHIR resource events does Events support?](./events-faqs.md).
+
+### Adding an HTTP action
+
+Once you have specified the trigger events, you must add more details. Click the "+" below the "When a resource event occurs" button.
+
+You need to add a specific action. Click "Choose an operation" to continue. Then, for the operation, search for "HTTP" and click on "Built-in" to select an HTTP operation. The HTTP action will allow you to query the FHIR service.
+
+The options in this example are:
+
+- Method is "Get"
+- URL is "concat('https://', triggerBody()?['subject'], '/_history/', triggerBody()?['dataVersion'])".
+- Authentication type is "Managed Identity".
+- Audience is "concat('https://', triggerBody()?['data']['resourceFhirAccount'])"
+
+### Allow FHIR Reader access to your Logic App
+
+At this point, you need to give your app FHIR Reader access so that it can verify that the event details are correct. Follow these steps to give it access:
+
+1. The first step is to go back to your Logic App and click the Identity menu item.
+
+2. In the System assigned tab, make sure the Status is "On".
+
+3. Click on Azure role assignments. Click "Add role assignment".
+
+4. Specify the following:
+
+ - Scope = Subscription
+ - Subscription = your subscription
+ - Role = FHIR Data Reader.
+
+When you have completed the first four steps, add the role assignment by managed identity: choose Subscription and Managed identity (Logic App Standard), then select your Logic App by clicking its name and clicking the Select button. Finally, click "Review + assign" to assign the role.
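+If you'd rather script this step, here's a minimal Azure PowerShell sketch. The principal ID and subscription ID are placeholders, not values from this tutorial; use the principal ID shown on your Logic App's Identity page:
+
+```azurepowershell
+# Sketch: assign the FHIR Data Reader role to the Logic App's system-assigned identity.
+# "<principal-id>" and "<subscription-id>" are placeholders.
+New-AzRoleAssignment -ObjectId "<principal-id>" `
+  -RoleDefinitionName "FHIR Data Reader" `
+  -Scope "/subscriptions/<subscription-id>"
+```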
+
+### Add a condition
+
+After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Click the "+" below HTTP to "Choose an operation". On the right, search for the word "condition". Click on "Built-in" to display the Control icon. Next click Actions and choose Condition.
+
+When the condition is ready, you can specify what actions happen if the condition is true or false.
+
+### Choosing a condition criteria
+
+In order to specify whether you want to take action for the specific event, begin specifying the criteria by clicking on "Condition" in the workflow on the left. You will then see a set of condition choices on the right.
+
+Under the "And" box, add these two conditions:
+
+- resourceType
+- Event Type
+
+#### resourceType
+
+The expression for getting the resourceType is `body('HTTP')?['resourceType']`.
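+The HTTP action returns the FHIR resource itself, so this expression reads the top-level `resourceType` field of the response body. As an illustration, here's a PowerShell sketch with a minimal, assumed Patient body:
+
+```azurepowershell
+# Illustration only: a minimal assumed FHIR response body.
+$body = '{"resourceType":"Patient","id":"123"}' | ConvertFrom-Json
+$body.resourceType -eq "Patient"   # True; this mirrors body('HTTP')?['resourceType']
+```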
+
+#### Event Type
+
+You can select Event Type from the Dynamic Content.
+
+Here is an example of the Condition criteria:
++
+#### Save your workflow
+
+When you have entered the condition criteria, save your workflow.
+
+#### Workflow dashboard
+
+To check the status of your workflow, click Overview in the workflow menu. Here is a dashboard for a workflow:
++
+You can do the following operations from your workflow dashboard:
+
+- Run trigger
+- Refresh
+- Enable
+- Disable
+- Delete
+
+### Condition testing
+
+Save your workflow by clicking the "Save" button.
+
+To test your new workflow, do the following steps:
+
+1. Add a new Patient FHIR Resource to your FHIR Service (a sketch for this step follows the list).
+2. Wait a moment or two and then check the Overview webpage of your Logic App workflow.
+3. The event should be shaded in green if the action was successful.
+4. If it failed, the event will be shaded in red.
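+For step 1, one way to add a test Patient is a direct REST call to the FHIR service. This is a sketch only; `<your-fhir-service>` is a placeholder for your FHIR service hostname, and it assumes your signed-in account has FHIR Data Writer access:
+
+```azurepowershell
+# Sketch: create a test Patient via the FHIR REST API.
+$fhirUrl = "https://<your-fhir-service>.fhir.azurehealthcareapis.com"
+$token   = (Get-AzAccessToken -ResourceUrl $fhirUrl).Token
+$patient = '{"resourceType":"Patient","name":[{"family":"Test","given":["Debug"]}]}'
+Invoke-RestMethod -Method Post -Uri "$fhirUrl/Patient" `
+  -ContentType "application/fhir+json" `
+  -Headers @{ Authorization = "Bearer $token" } `
+  -Body $patient
+```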
+
+Here is an example of a workflow trigger success operation:
++
+## Next steps
+
+For more information about FHIR events, see
+
+>[!div class="nextstepaction"]
+>[What are Events?](./events-overview.md)
+
+FHIR&#174; is a registered trademark of HL7 and is used with the permission of HL7.
lab-services How To Create Manage Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-manage-template.md
In this step, you publish the template VM. When you publish the template VM, Azu
2. On the **Publish template** page, enter the number of virtual machines you want to create in the lab, and then select **Publish**. ![Publish template - number of VMs](./media/how-to-create-manage-template/publish-template-number-vms.png)
-3. You see the **status of publishing** the template on page. This process can take up to an hour.
+3. You see the **status of publishing** the template on the page. If using [Azure Lab Services April 2022 Update (preview)](lab-services-whats-new.md), publishing can take up to 20 minutes.
![Publish template - progress](./media/how-to-create-manage-template/publish-template-progress.png) 4. Wait until the publishing is complete and then switch to the **Virtual machines pool** page by selecting **Virtual machines** on the left menu or by selecting the **Virtual machines** tile. Confirm that you see virtual machines that are in the **Unassigned** state. These VMs aren't assigned to students yet. They should be in the **Stopped** state. You can start a student VM, connect to the VM, stop the VM, and delete the VM on this page. You can start them on this page or let your students start the VMs.
load-balancer Quickstart Basic Internal Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-cli.md
This quickstart requires version 2.0.28 or later of the Azure CLI. If you're usi
An Azure resource group is a logical container into which you deploy and manage your Azure resources.
-Create a resource group with [az group create](/cli/azure/group#az_group_create).
+Create a resource group with [az group create](/cli/azure/group#az-group-create).
```azurecli az group create \
When you create an internal load balancer, a virtual network is configured as th
Before you deploy VMs and test your load balancer, create the supporting virtual network and subnet. The virtual network and subnet will contain the resources deployed later in this article.
-Create a virtual network by using [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create).
+Create a virtual network by using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create).
```azurecli az network vnet create \
In this example, you'll create an Azure Bastion host. The Azure Bastion host is
### Create a bastion public IP address
-Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public IP address for the Azure Bastion host.
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the Azure Bastion host.
```azurecli az network public-ip create \
az network public-ip create \
``` ### Create a bastion subnet
-Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create) to create a subnet.
+Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a subnet.
```azurecli az network vnet subnet create \
az network vnet subnet create \
### Create the bastion host
-Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create) to create a host.
+Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a host.
```azurecli az network bastion create \
This section details how you can create and configure the following components o
### Create the load balancer resource
-Create an internal load balancer with [az network lb create](/cli/azure/network/lb#az_network_lb_create).
+Create an internal load balancer with [az network lb create](/cli/azure/network/lb#az-network-lb-create).
```azurecli az network lb create \
A health probe checks all virtual machine instances to ensure they can send netw
A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
-Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az_network_lb_probe_create).
+Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create).
```azurecli az network lb probe create \
A load balancer rule defines:
* The required source and destination port
-Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az_network_lb_rule_create).
+Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create).
```azurecli az network lb rule create \
Create a load balancer rule with [az network lb rule create](/cli/azure/network/
For a standard load balancer, the VMs in the backend pool are required to have network interfaces that belong to a network security group.
-To create a network security group, use [az network nsg create](/cli/azure/network/nsg#az_network_nsg_create).
+To create a network security group, use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create).
```azurecli az network nsg create \
To create a network security group, use [az network nsg create](/cli/azure/netwo
## Create a network security group rule
-To create a network security group rule, use [az network nsg rule create](/cli/azure/network/nsg/rule#az_network_nsg_rule_create).
+To create a network security group rule, use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create).
```azurecli az network nsg rule create \
In this section, you create:
### Create network interfaces for the virtual machines
-Create two network interfaces with [az network nic create](/cli/azure/network/nic#az_network_nic_create).
+Create two network interfaces with [az network nic create](/cli/azure/network/nic#az-network-nic-create).
```azurecli array=(myNicVM1 myNicVM2)
Create two network interfaces with [az network nic create](/cli/azure/network/ni
### Create the availability set for the virtual machines
-Create the availability set with [az vm availability-set create](/cli/azure/vm/availability-set#az_vm_availability_set_create).
+Create the availability set with [az vm availability-set create](/cli/azure/vm/availability-set#az-vm-availability-set-create).
```azurecli az vm availability-set create \
Create the availability set with [az vm availability-set create](/cli/azure/vm/a
### Create the virtual machines
-Create the virtual machines with [az vm create](/cli/azure/vm#az_vm_create).
+Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create).
```azurecli array=(1 2)
It can take a few minutes for the VMs to deploy.
## Add virtual machines to the backend pool
-Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az_network_nic_ip_config_address_pool_add).
+Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az-network-nic-ip-config-address-pool-add).
```azurecli array=(VM1 VM2)
Add the virtual machines to the backend pool with [az network nic ip-config addr
## Create test virtual machine
-Create the network interface with [az network nic create](/cli/azure/network/nic#az_network_nic_create).
+Create the network interface with [az network nic create](/cli/azure/network/nic#az-network-nic-create).
```azurecli az network nic create \
Create the network interface with [az network nic create](/cli/azure/network/nic
--subnet myBackEndSubnet \ --network-security-group myNSG ```
-Create the virtual machine with [az vm create](/cli/azure/vm#az_vm_create).
+Create the virtual machine with [az vm create](/cli/azure/vm#az-vm-create).
```azurecli az vm create \
You might need to wait a few minutes for the virtual machine to deploy.
## Install IIS
-Use [az vm extension set](/cli/azure/vm/extension#az_vm_extension_set) to install IIS on the backend virtual machines and set the default website to the computer name.
+Use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) to install IIS on the backend virtual machines and set the default website to the computer name.
```azurecli array=(myVM1 myVM2)
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
This quickstart requires version 2.0.28 or later of the Azure CLI. If you're usi
An Azure resource group is a logical container into which you deploy and manage your Azure resources.
-Create a resource group with [az group create](/cli/azure/group#az_group_create).
+Create a resource group with [az group create](/cli/azure/group#az-group-create).
```azurecli az group create \
When you create an internal load balancer, a virtual network is configured as th
Before you deploy VMs and test your load balancer, create the supporting virtual network and subnet. The virtual network and subnet will contain the resources deployed later in this article.
-Create a virtual network by using [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create).
+Create a virtual network by using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create).
```azurecli az network vnet create \
In this example, you'll create an Azure Bastion host. The Azure Bastion host is
### Create a bastion public IP address
-Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public IP address for the Azure Bastion host.
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the Azure Bastion host.
```azurecli az network public-ip create \
az network public-ip create \
``` ### Create a bastion subnet
-Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create) to create a subnet.
+Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a subnet.
```azurecli az network vnet subnet create \
az network vnet subnet create \
### Create the bastion host
-Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create) to create a host.
+Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a host.
```azurecli az network bastion create \
This section details how you can create and configure the following components o
### Create the load balancer resource
-Create an internal load balancer with [az network lb create](/cli/azure/network/lb#az_network_lb_create).
+Create an internal load balancer with [az network lb create](/cli/azure/network/lb#az-network-lb-create).
```azurecli az network lb create \
A health probe checks all virtual machine instances to ensure they can send netw
A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
-Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az_network_lb_probe_create).
+Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create).
```azurecli az network lb probe create \
A load balancer rule defines:
* The required source and destination port
-Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az_network_lb_rule_create).
+Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create).
```azurecli az network lb rule create \
Create a load balancer rule with [az network lb rule create](/cli/azure/network/
For a standard load balancer, the VMs in the backend pool are required to have network interfaces that belong to a network security group.
-To create a network security group, use [az network nsg create](/cli/azure/network/nsg#az_network_nsg_create).
+To create a network security group, use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create).
```azurecli az network nsg create \
To create a network security group, use [az network nsg create](/cli/azure/netwo
## Create a network security group rule
-To create a network security group rule, use [az network nsg rule create](/cli/azure/network/nsg/rule#az_network_nsg_rule_create).
+To create a network security group rule, use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create).
```azurecli az network nsg rule create \
In this section, you create:
### Create network interfaces for the virtual machines
-Create two network interfaces with [az network nic create](/cli/azure/network/nic#az_network_nic_create).
+Create two network interfaces with [az network nic create](/cli/azure/network/nic#az-network-nic-create).
```azurecli array=(myNicVM1 myNicVM2)
Create two network interfaces with [az network nic create](/cli/azure/network/ni
### Create the virtual machines
-Create the virtual machines with [az vm create](/cli/azure/vm#az_vm_create).
+Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create).
```azurecli array=(1 2)
It can take a few minutes for the VMs to deploy.
## Add virtual machines to the backend pool
-Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az_network_nic_ip_config_address_pool_add).
+Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az-network-nic-ip-config-address-pool-add).
```azurecli array=(VM1 VM2)
To provide outbound internet access for resources in the backend pool, create a
### Create public IP
-Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a single IP for the outbound connectivity.
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a single IP for the outbound connectivity.
```azurecli az network public-ip create \
Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public
### Create NAT gateway resource
-Use [az network nat gateway create](/cli/azure/network/nat#az_network_nat_gateway_create) to create the NAT gateway resource. The public IP created in the previous step is associated with the NAT gateway.
+Use [az network nat gateway create](/cli/azure/network/nat#az-network-nat-gateway-create) to create the NAT gateway resource. The public IP created in the previous step is associated with the NAT gateway.
```azurecli az network nat gateway create \
Use [az network nat gateway create](/cli/azure/network/nat#az_network_nat_gatewa
### Associate NAT gateway with subnet
-Configure the source subnet in virtual network to use a specific NAT gateway resource with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_update).
+Configure the source subnet in virtual network to use a specific NAT gateway resource with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update).
```azurecli az network vnet subnet update \
Configure the source subnet in virtual network to use a specific NAT gateway res
## Create test virtual machine
-Create the network interface with [az network nic create](/cli/azure/network/nic#az_network_nic_create).
+Create the network interface with [az network nic create](/cli/azure/network/nic#az-network-nic-create).
```azurecli az network nic create \
Create the network interface with [az network nic create](/cli/azure/network/nic
--subnet myBackEndSubnet \ --network-security-group myNSG ```
-Create the virtual machine with [az vm create](/cli/azure/vm#az_vm_create).
+Create the virtual machine with [az vm create](/cli/azure/vm#az-vm-create).
```azurecli az vm create \
You might need to wait a few minutes for the virtual machine to deploy.
## Install IIS
-Use [az vm extension set](/cli/azure/vm/extension#az_vm_extension_set) to install IIS on the backend virtual machines and set the default website to the computer name.
+Use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) to install IIS on the backend virtual machines and set the default website to the computer name.
```azurecli array=(myVM1 myVM2)
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-overview.md
For more information about the ways workflows can access and work with apps, dat
* [Connectors for Azure Logic Apps](../connectors/apis-list.md)
-* [Managed connectors for Azure Logic Apps](../connectors/built-in.md)
+* [Managed connectors for Azure Logic Apps](../connectors/managed.md)
-* [Built-in triggers and actions for Azure Logic Apps](../connectors/managed.md)
+* [Built-in triggers and actions for Azure Logic Apps](../connectors/built-in.md)
* [B2B enterprise integration solutions with Azure Logic Apps](logic-apps-enterprise-integration-overview.md)
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
| Service tag | Protocol | Port | | -- |:--:|:--:| | AzureActiveDirectory | TCP | 80, 443 |
- | AzureMachineLearning | TCP | 443 |
+ | AzureMachineLearning | TCP | 443, 8787, 18881 |
| AzureResourceManager | TCP | 443 | | Storage.region | TCP | 443 | | AzureFrontDoor.FrontEnd</br>* Not needed in Azure China. | TCP | 443 | | AzureContainerRegistry.region | TCP | 443 |
- | MicrosoftContainerRegistry.region | TCP | 443 |
+ | MicrosoftContainerRegistry.region</br>**Note** that this tag has a dependency on the **AzureFrontDoor.FirstParty** tag | TCP | 443 |
| AzureKeyVault.region | TCP | 443 | > [!TIP]
machine-learning How To Train Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-sdk.md
Let us tackle these steps below
### 1. Connect to the workspace
-To connect to the workspace, you need identifier parameters - a subscription, resource group and workspace name. You'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. To authenticate, you use the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python). Check this [example](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace.
+To connect to the workspace, you need identifier parameters - a subscription, resource group and workspace name. You'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. To authenticate, you use the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python&preserve-view=true). Check this [example](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/jobs/configuration.ipynb) for more details on how to configure credentials and connect to a workspace.
```python #import required libraries
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
command_job_for_sweep = command_job(
This code defines a search space with two parameters - `learning_rate` and `keep_probability`. `learning_rate` has a normal distribution with mean value 10 and a standard deviation of 3. `keep_probability` has a uniform distribution with a minimum value of 0.05 and a maximum value of 0.1.
-For the CLI, you can use the [sweep job YAML schema](/articles/machine-learning/reference-yaml-job-sweep)., to define the search space in your YAML:
+For the CLI, you can use the [sweep job YAML schema](/azure/machine-learning/reference-yaml-job-sweep) to define the search space in your YAML:
```YAML search_space: conv_size:
Specify the parameter sampling method to use over the hyperparameter space. Azur
### Random sampling
-[Random sampling](/python/api/azure-ai-ml/azure.ai.ml.sweep.randomparametersampling) supports discrete and continuous hyperparameters. It supports early termination of low-performance jobs. Some users do an initial search with random sampling and then refine the search space to improve results.
+[Random sampling](/azure/machine-learning/how-to-tune-hyperparameters) supports discrete and continuous hyperparameters. It supports early termination of low-performance jobs. Some users do an initial search with random sampling and then refine the search space to improve results.
In random sampling, hyperparameter values are randomly selected from the defined search space. After creating your command job, you can use the sweep parameter to define the sampling algorithm.
sweep_job = command_job_for_sweep.sweep(
### Grid sampling
-[Grid sampling](/python/api/azure-ai-ml/azure.ai.ml.sweep.gridparametersampling) supports discrete hyperparameters. Use grid sampling if you can budget to exhaustively search over the search space. Supports early termination of low-performance jobs.
+Grid sampling supports discrete hyperparameters. Use grid sampling if you have the budget to exhaustively search over the search space. It supports early termination of low-performance jobs.
Grid sampling does a simple grid search over all possible values. Grid sampling can only be used with `choice` hyperparameters. For example, the following space has six samples:
sweep_job = command_job_for_sweep.sweep(
### Bayesian sampling
-[Bayesian sampling](/python/api/azure-ai-ml/azure.ai.ml.sweep.bayesianparametersampling) is based on the Bayesian optimization algorithm. It picks samples based on how previous samples did, so that new samples improve the primary metric.
+Bayesian sampling is based on the Bayesian optimization algorithm. It picks samples based on how previous samples did, so that new samples improve the primary metric.
Bayesian sampling is recommended if you have enough budget to explore the hyperparameter space. For best results, we recommend a maximum number of jobs greater than or equal to 20 times the number of hyperparameters being tuned.
sweep_job = command_job_for_sweep.sweep(
## <a name="specify-objective-to-optimize"></a> Specify the objective of the sweep
-Define the objective of your sweep job by specifying the [primary metric](/python/api/azure-ai-ml/azure.ai.ml.sweep.primary_metric) and [goal](/python/api/azure-ai-ml/azure.ai.ml.sweep.goal) you want hyperparameter tuning to optimize. Each training job is evaluated for the primary metric. The early termination policy uses the primary metric to identify low-performance jobs.
+Define the objective of your sweep job by specifying the primary metric and goal you want hyperparameter tuning to optimize. Each training job is evaluated for the primary metric. The early termination policy uses the primary metric to identify low-performance jobs.
* `primary_metric`: The name of the primary metric needs to exactly match the name of the metric logged by the training script * `goal`: It can be either `Maximize` or `Minimize` and determines whether the primary metric will be maximized or minimized when evaluating the jobs.
This code configures the hyperparameter tuning experiment to use a maximum of 20
## Configure hyperparameter tuning experiment
-To [configure your hyperparameter tuning](/python/api/azure-ai-ml/azure.ai.ml.train.sweep) experiment, provide the following:
+To configure your hyperparameter tuning experiment, provide the following:
* The defined hyperparameter search space * Your sampling algorithm * Your early termination policy
machine-learning Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md
For more information on installing and using the different extensions, see the f
For more information on installing and using the different SDK versions:
-* `azureml-core` - [Install the Azure Machine Learning SDK (v1) for Python](/python/api/overview/azure/ml/install?view=azure-ml-py)
+* `azureml-core` - [Install the Azure Machine Learning SDK (v1) for Python](/python/api/overview/azure/ml/install?view=azure-ml-py&preserve-view=true)
* `azure-ai-ml` - [Install the Azure Machine Learning SDK (v2) for Python](https://aka.ms/sdk-v2-install)
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-dynamics-365.md
Previously updated : 05/25/2022 Last updated : 06/06/2022 # Plan a Microsoft Dynamics 365 offer
The following table describes the transaction process of each listing option.
## Customer leads
-When you're publishing an offer to the commercial marketplace with Partner Center, you'll want to connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product. Connecting to a CRM is required if you want to enable a test drive; otherwise, connecting to a CRM is optional. Partner Center supports Azure table, Dynamics 365 Customer Engagement, HTTPS endpoint, Marketo, and Salesforce.
+When you're publishing an offer to the commercial marketplace with Partner Center, you'll want to connect it to your Customer Relationship Management (CRM) system. This lets you receive customer contact information as soon as someone expresses interest in or uses your product. Partner Center supports Azure table, Dynamics 365 Customer Engagement, HTTPS endpoint, Marketo, and Salesforce.
## Legal
network-function-manager Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/requirements.md
Customers can choose from one or more Network Function Manager [partners](partne
Each partner has networking requirements for deployment of their network function to an Azure Stack Edge device. Refer to the product documentation from the network function partners to complete the following configuration tasks: * [Configure network on different ports](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
-* [Enable compute network on your Azure Stack Edge device](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips).
+* [Enable compute network on your Azure Stack Edge device](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches).
## <a name="account"></a>Azure account
postgresql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-connect-server-vnet.md
ssh -i .\Downloads\myKey1.pem azureuser@10.111.12.123
You need to install the postgresql-client tool to be able to connect to the server. ```bash
-sudo apt-getupdate
+sudo apt-get update
sudo apt-get install postgresql-client ```
purview Tutorial Atlas 2 2 Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-atlas-2-2-apis.md
Previously updated : 04/18/2021 Last updated : 04/18/2022 # Customer intent: I can use the new APIs available with Atlas 2.2
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
Previously updated : 04/10/2022 Last updated : 06/02/2022 # Debug an Azure Cognitive Search skillset in Azure portal
A debug session is a cached indexer and skillset execution, scoped to a single d
## Prerequisites
-+ An existing enrichment pipeline, including a data source, a skillset, an indexer, and an index.
++ An existing enrichment pipeline, including a data source, a skillset, an indexer, and an index.
- A debug session works with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources. The MongoDB API (preview) of Cosmos DB is currently not supported.
++ You must have at least the **Contributor** role on the search service to be able to run Debug Sessions.+++ An Azure Storage account, used to save session state.+++ You must have at least the **Storage Blob Data Contributor** role assigned on the storage account.+++ If the Azure Storage account is behind a firewall, you must configure it to [provide access to the Search service](search-indexer-howto-access-ip-restricted.md).++
+## Limitations
+
+A Debug Session works with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources. The following list notes the exceptions:
+++ The MongoDB API (preview) of Cosmos DB is currently not supported.+++ For the SQL API of Cosmos DB, if a row fails during indexing and there is no corresponding metadata, the debug session might not pick the correct row.+++ For the SQL API of Cosmos DB, if a partitioned collection was previously non-partitioned, a Debug Session won't find the document.
-+ Azure Storage, used to save session state.
## Create a debug session
You can edit the skill definition in the portal.
At this point, new requests from your debug session should now be sent to your local Azure Function. You can use breakpoints in your Visual Studio code to debug your code or run step by step. +
+## Expected behaviors
++ If debugging a Cosmos DB SQL data source, and the Cosmos DB SQL collection was previously non-partitioned but was later changed to a partitioned collection on the Cosmos DB end, Debug Sessions won't be able to pick up the correct document from Cosmos DB.++ Cosmos DB SQL errors omit some metadata about which row failed, so in some cases, Debug Sessions won't pick the correct row.++
search Cognitive Search Quickstart Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-blob.md
Last updated 05/31/2022
Learn how AI enrichment in Azure Cognitive Search adds Optical Character Recognition (OCR), image analysis, language detection, text translation, and entity recognition to create searchable content in a search index.
-In this quickstart, you'll run the **Import data** wizard to apply skills that transform and enrich content during indexing. Output is a searchable index containing image text, translated text, and entities. Enriched content is queryable in the portal using [Search explorer](search-explorer.md).
+In this quickstart, you'll run the **Import data** wizard to apply skills that transform and enrich content during indexing. Output is a searchable index containing AI-generated image text, captions, and entities. Enriched content is queryable in the portal using [Search explorer](search-explorer.md).
To prepare, you'll create a few resources and upload sample files before running the wizard.
Cognitive skills indexing takes longer to complete than typical text-based index
To check details about execution status, select an indexer from the list, and then select **Success** (or **Failed**) to view execution details.
-In this demo, there is one warning. It tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. This warning occurs because the upstream OCR skill didn't recognize any text in the image, and thus could not provide a text input to the downstream Entity Recognition skill.
+In this demo, there is one warning: `"Could not execute skill because one or more skill input was invalid."` It tells you that a PNG file in the data source doesn't provide a text input to Entity Recognition. This warning occurs because the upstream OCR skill didn't recognize any text in the image, and thus could not provide a text input to the downstream Entity Recognition skill.
Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you'll begin to notice patterns and learn which warnings are safe to ignore.
search Search Howto Connecting Azure Sql Database To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md
It's not recommended. Only **rowversion** allows for reliable data synchronizati
+ You can ensure that when the indexer runs, there are no outstanding transactions on the table that's being indexed (for example, all table updates happen as a batch on a schedule, and the Azure Cognitive Search indexer schedule is set to avoid overlapping with the table update schedule). + You periodically do a full reindex to pick up any missed rows.+
+**Q: Can I use the Always Encrypted feature when indexing from Azure SQL Database?**
+
+[Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
search Search Howto Connecting Azure Sql Iaas To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md
If you are using the Azure portal to create an indexer, you must grant the porta
To get the portal IP address, ping `stamp2.ext.search.windows.net`, which is the domain of the traffic manager. The request will time out, but the IP address will be visible in the status message. For example, in the message "Pinging azsyrie.northcentralus.cloudapp.azure.com [52.252.175.48]", the IP address is "52.252.175.48".
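For example, from any command prompt:

```bash
# The request times out, but the resolved IP address shown in the output
# is the one to use in the inbound firewall rule.
ping stamp2.ext.search.windows.net
```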
-> [!NOTE]
-> Clusters in different regions connect to different traffic managers. Regardless of the domain name, the IP address returned from the ping is the correct one to use when defining an inbound firewall rule for the Azure portal in your region.
+Clusters in different regions connect to different traffic managers. Regardless of the domain name, the IP address returned from the ping is the correct one to use when defining an inbound firewall rule for the Azure portal in your region.
+ ## Next steps With configuration out of the way, you can now specify a SQL Server on Azure VM as the data source for an Azure Cognitive Search indexer. For more information, see [Connecting Azure SQL Database to Azure Cognitive Search using indexers](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).++
+## FAQ
+
+**Q: Can I use the Always Encrypted feature when indexing from SQL Server?**
+
+[Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
+
search Search Howto Connecting Azure Sql Mi To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md
Previously updated : 03/10/2022 Last updated : 05/24/2022 # Indexer connections to Azure SQL Managed Instance through a public endpoint
Copy the connection string to use in the search indexer's data source connection
## Next steps With configuration out of the way, you can now specify a [SQL Managed Instance as an indexer data source](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).+
+## FAQ
+
+**Q: Can I use the Always Encrypted feature when indexing from SQL Managed Instance?**
+
+[Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) columns are not currently supported by Cognitive Search indexers.
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
description: Set up a SharePoint indexer to automate indexing of document librar
-+ Previously updated : 01/19/2022 Last updated : 06/01/2022 # Index data from SharePoint document libraries
sentinel Fusion Scenario Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/fusion-scenario-reference.md
This document lists the types of scenario-based multistage attacks, grouped by t
Since [Fusion](fusion.md) correlates multiple signals from various products to detect advanced multistage attacks, successful Fusion detections are presented as **Fusion incidents** on the Microsoft Sentinel **Incidents** page and not as **alerts**, and are stored in the *Incidents* table in **Logs** and not in the *SecurityAlerts* table.
-In order to enable these Fusion-powered attack detection scenarios, any data sources listed must be ingested to your Log Analytics workspace.
+In order to enable these Fusion-powered attack detection scenarios, any data sources listed must be ingested to your Log Analytics workspace. For scenarios with scheduled analytics rules, follow the instructions in [Configure scheduled analytics rules for Fusion detections](configure-fusion-rules.md#configure-scheduled-analytics-rules-for-fusion-detections).
> [!NOTE] > Some of these scenarios are in **PREVIEW**. They will be so indicated.
This scenario is currently in **PREVIEW**.
**MITRE ATT&CK techniques:** Command and Scripting Interpreter (T1059)
-**Data connector sources:** Microsoft Defender for Endpoint (formerly Microsoft Defender Advanced Threat Protection, or MDATP), Palo Alto Networks
+**Data connector sources:** Microsoft Defender for Endpoint (formerly Microsoft Defender Advanced Threat Protection, or MDATP), Microsoft Sentinel (scheduled analytics rule)
**Description:** Fusion incidents of this type indicate that an outbound connection request was made via a PowerShell command, and following that, anomalous inbound activity was detected by the Palo Alto Networks Firewall. This evidence suggests that an attacker has likely gained access to your network and is trying to perform malicious actions. Connection attempts by PowerShell that follow this pattern could be an indication of malware command and control activity, requests for the download of additional malware, or an attacker establishing remote interactive access. As with all "living off the land" attacks, this activity could be a legitimate use of PowerShell. However, the PowerShell command execution followed by suspicious inbound Firewall activity increases the confidence that PowerShell is being used in a malicious manner and should be investigated further. In Palo Alto logs, Microsoft Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
This scenario is currently in **PREVIEW**.
**MITRE ATT&CK techniques:** Windows Management Instrumentation (T1047)
-**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Palo Alto Networks
+**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Microsoft Sentinel (scheduled analytics rule)
**Description:** Fusion incidents of this type indicate that Windows Management Interface (WMI) commands were remotely executed on a system, and following that, suspicious inbound activity was detected by the Palo Alto Networks Firewall. This evidence suggests that an attacker may have gained access to your network and is attempting to move laterally, escalate privileges, and/or execute malicious payloads. As with all "living off the land" attacks, this activity could be a legitimate use of WMI. However, the remote WMI command execution followed by suspicious inbound Firewall activity increases the confidence that WMI is being used in a malicious manner and should be investigated further. In Palo Alto logs, Microsoft Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
This scenario is currently in **PREVIEW**.
**MITRE ATT&CK techniques:** Encrypted Channel (T1573), Proxy (T1090)
-**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Palo Alto Networks
+**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Microsoft Sentinel (scheduled analytics rule)
**Description:** Fusion incidents of this type indicate that an outbound connection request was made to the TOR anonymization service, and following that, anomalous inbound activity was detected by the Palo Alto Networks Firewall. This evidence suggests that an attacker has likely gained access to your network and is trying to conceal their actions and intent. Connections to the TOR network following this pattern could be an indication of malware command and control activity, requests for the download of additional malware, or an attacker establishing remote interactive access. In Palo Alto logs, Microsoft Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
This scenario is currently in **PREVIEW**.
**MITRE ATT&CK techniques:** Not applicable
-**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Palo Alto Networks
+**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Microsoft Sentinel (scheduled analytics rule)
**Description:** Fusion incidents of this type indicate that an outbound connection to an IP address with a history of unauthorized access attempts was established, and following that, anomalous activity was detected by the Palo Alto Networks Firewall. This evidence suggests that an attacker has likely gained access to your network. Connection attempts following this pattern could be an indication of malware command and control activity, requests for the download of additional malware, or an attacker establishing remote interactive access. In Palo Alto logs, Microsoft Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
This scenario is currently in **PREVIEW**.
**MITRE ATT&CK techniques:** Exploit Public-Facing Application (T1190), Exploitation for Client Execution (T1203), Exploitation of Remote Services(T1210), Exploitation for Privilege Escalation (T1068)
-**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Palo Alto Networks
+**Data connector sources:** Microsoft Defender for Endpoint (formerly MDATP), Microsoft Sentinel (scheduled analytics rule)
**Description:** Fusion incidents of this type indicate that non-standard uses of protocols, resembling the use of attack frameworks such as Metasploit, were detected, and following that, suspicious inbound activity was detected by the Palo Alto Networks Firewall. This may be an initial indication that an attacker has exploited a service to gain access to your network resources or that an attacker has already gained access and is trying to further exploit available systems/services to move laterally and/or escalate privileges. In Palo Alto logs, Microsoft Sentinel focuses on [threat logs](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/view-and-manage-logs/log-types-and-severity-levels/threat-logs), and traffic is considered suspicious when threats are allowed (suspicious data, files, floods, packets, scans, spyware, URLs, viruses, vulnerabilities, wildfire-viruses, wildfires). Also reference the Palo Alto Threat Log corresponding to the [Threat/Content Type](https://docs.paloaltonetworks.com/pan-os/8-1/pan-os-admin/monitoring/use-syslog-for-monitoring/syslog-field-descriptions/threat-log-fields.html) listed in the Fusion incident description for additional alert details.
service-connector Concept Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-availability.md
+
+ Title: High availability for Service Connector
+description: This article covers availability zones, zone redundancy, disaster recovery, and cross-region failover for Service Connector.
++++ Last updated : 05/24/2022+
+#Customer intent: As an Azure developer, I want to understand the availability of my connection created with Service Connector.
++
+# High availability for Service Connector
+
+Service Connector supports Azure availability zones to help you achieve resiliency and reliability for your business-critical workloads. The goal of the high availability architecture in Service Connector is to guarantee that your service connections are up and running at least 99.9% of the time, so that you don't have to worry about the effects of potential maintenance operations and outages. Service Connector is designed to provide high availability support for all types of applications you're running on Azure.
+
+Users can distribute Azure compute services across availability zones in many regions. Service Connector is an extension resource provider to these compute services. When you create a service connection in a compute service with availability zones enabled, Azure will also automatically set up the corresponding service connection availability zone for your service connection. Microsoft is responsible for setting up availability zones and disaster recovery for your service connections.
+
+## Zone redundancy in Service Connector
+
+Service Connector is an Azure extension resource provider. It extends Azure App Service, Azure Spring Apps and Azure Container Apps. When you create a new service connection in one of these compute services with Service Connector, there's a connection resource provisioned as part of your top-level parent compute service.
+
+To enable zone redundancy for your connection, you must enable zone redundancy for your compute service. Once the compute service has been configured with zone redundancy, your service connections will also automatically become zone-redundant. For example, if you have an app service with zone redundancy enabled, the platform automatically spreads your app service instances across three zones in the selected region. When you create a service connection in this app service with Service Connector, the service connection resource is also automatically created in the three corresponding zones in the selected region. Traffic is routed to all of your available connection resources. When a zone goes down, the platform detects the lost instances, automatically attempts to find new replacement instances, and spreads the traffic as needed.
+
+> [!NOTE]
+> To create, update, validate and list service connections, Service Connector calls APIs from a compute service and a target service. As Service Connector relies on the responses from both the compute service and the target service, requests to Service Connector in a zone-down scenario may not succeed if the target service can't be reached. This limitation applies to App Service, Container Apps and Spring Apps.
+
+## How to create a zone-redundant service connection with Service Connector
+
+Follow the instructions below to create a zone-redundant Service Connection in App Service using the Azure CLI or the Azure portal. The same process can be used to create a zone-redundant connection for Spring Apps and Container Apps compute services.
+
+### [Azure CLI](#tab/azure-cli)
+
+To enable zone redundancy for a service connection using the Azure CLI, you must first create a zone-redundant app service.
+
+1. Create an App Service plan and include a `--zone-redundant` parameter. Optionally include the `--number-of-workers` parameter to specify capacity. For more information, see [How to deploy a zone-redundant App Service](../app-service/environment/overview-zone-redundancy.md).
+
+ ```azurecli
+ az appservice plan create --resource-group MyResourceGroup --name MyPlan --zone-redundant --number-of-workers 6
+ ```
+
+1. Create an application in App Service and a connection to your Blob Storage account or another target service of your choice.
+
+ ```azurecli
+ az webapp create --name MyApp --plan MyPlan --resource-group MyResourceGroup
+ az webapp connection create storage-blob
+ ```
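+
+To double-check the result, you can query the plan's `zoneRedundant` property. This is a quick sketch that reuses the `MyPlan` and `MyResourceGroup` names from above and assumes a recent Azure CLI version that surfaces the property:
+
+ ```azurecli
+ az appservice plan show --name MyPlan --resource-group MyResourceGroup --query zoneRedundant
+ ```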
+
+### [Portal](#tab/azure-portal)
+
+To enable zone redundancy for a service connection in App Service using the Azure portal, follow the process below:
+
+1. In the Azure portal, in the **Search resources, services, and docs (G+/)** search box, enter **App Services** and select **App Services**.
+1. Select **Create** and fill out the form. In the first tab, under **Zone redundancy**, select **Enabled**.
+
+ :::image type="content" source="media/enable-zone-redundancy.png" alt-text="Screenshot of the Azure portal, enabling zone redundancy in App Services.":::
+
+1. Select **Review + create** and then **Create**.
+1. In the App Service instance, select **Service Connector** from the left menu and select **Create**.
+1. Fill out the form to create the connection.
+++
+Because you enabled zone redundancy for your App Service, the service connection is also zone-redundant.
+
+> [!TIP]
+> Enabling zone redundancy for your target service is recommended. In a zone-down scenario, traffic to your connection will automatically be spread to other zones. However, creating, validating and updating connections rely on management APIs from the target service. If a target service doesn't support zone redundancy or doesn't have zone redundancy enabled, these operations will fail.
+
+## Understand disaster recovery and resiliency in Service Connector
+
+Disaster recovery is the process of restoring application functionality after a catastrophic loss.
+
+In the cloud, we acknowledge upfront that failures will certainly happen. Instead of trying to prevent failures altogether, the goal is to minimize the effects of a single failing component. If there's a disaster, Service Connector will fail over to the paired region. Customers don't need to do anything if the outage is declared by the Service Connector team.
+
+We'll use the term RTO (Recovery Time Objective) to indicate the time between the beginning of an outage impacting Service Connector and the recovery to full availability. We'll use RPO (Recovery Point Objective) to indicate the time between the last operation correctly restored and the start of the outage affecting Service Connector. The expected and maximum RPO and RTO are both 24 hours.
+
+Operations against Service Connector may fail during the disaster, before the failover happens. Once the failover is completed, data will be restored and the customer isn't required to take any action.
+
+Service Connector handles business continuity and disaster recovery (BCDR) for storage and compute. The platform strives to have as minimal an impact as possible in case of issues in storage or compute, in any region. The data layer design prioritizes availability over latency in the event of a disaster: if a region goes down, Service Connector will attempt to serve the end-user request from its paired region.
+
+During the failover action, Service Connector handles the DNS remapping to the available regions. From the customer's point of view, all data and actions are served as usual after failover.
+Service Connector will change its DNS in about one hour. Performing a manual failover would take more time. As Service Connector is a resource provider built on top of other Azure services, the actual time depends on the failover time of the underlying services.
+
+## Disaster recovery region support
+
+Service Connector currently supports the following region pairs. In the event of a primary region outage, failover to the secondary region starts automatically.
+
+| Primary | Secondary |
+|--|-|
+| East US 2 EUAP | East US |
+| West Central US | West Central US 2 |
+| West Europe | North Europe |
+| North Europe | West Europe |
+| East US | West US 2 |
+| West US 2 | East US |
+
+## Cross-region failover
+
+Microsoft is responsible for handling cross-region failovers. Service Connector runs health checks every 10 minutes and regional failovers are detected and handled in the Service Connector backend. The failover process doesn't require any changes in the customer's applications or compute service configurations. Service Connector uses an active-passive cluster configuration with automatic failover. After a disaster recovery, customers can use the full functionalities provided by Service Connector.
+
+The health check that runs every 10 minutes simulates user behavior by creating, validating, and updating connections to target services in each of the compute services supported by Service Connector. Microsoft will start to analyze and launch a Service Connector failover if we meet any of the following conditions:
+
+- The service health check fails three times in a row
+- Service Connector's dependent services declare an outage
+- Customers report a region outage
+
+Requests to service connections are impacted during a failover. Once the failover is complete, service connection data is restored. Visit the [Azure status page](https://status.azure.com/en-us/status) to check the status of all Azure services.
+
+## Next steps
+
+Go to the concept article below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
service-connector How To Integrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md
+
+ Title: Integrate Azure SQL Database with Service Connector
+description: Integrate SQL into your application with Service Connector
++++ Last updated : 06/02/2022++
+# Integrate Azure SQL Database with Service Connector
+
+This page shows all the supported compute services, clients, and authentication types to connect services to Azure SQL Database instances, using Service Connector. This page also shows the default environment variable names and application properties needed to create service connections. You might still be able to connect to an Azure SQL Database instance using other programming languages, without using Service Connector. Learn more about the [Service Connector environment variable naming conventions](concept-service-connector-internals.md).
+
+## Supported compute services
+
+- Azure App Service
+- Azure Spring Cloud
+
+## Supported authentication types and clients
+
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|--|:--:|::|::|:--:|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Go | | | ![yes icon](./media/green-check.png) | |
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| PHP | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
+| Python - Django | | | ![yes icon](./media/green-check.png) | |
+| Ruby | | | ![yes icon](./media/green-check.png) | |
+
+## Default environment variable names or application properties
+
+Use the environment variable names and application properties listed below to connect a service to Azure SQL Database using a secret and a connection string.
+
+### Connect an Azure App Service instance
+
+Use the connection details below to connect Azure App Service instances with .NET, Go, Java, Java - Spring Boot, PHP, Node.js, Python, Python - Django and Ruby. For each example below, replace the placeholders `<sql-server>`, `<sql-db>`, `<sql-user>`, and `<sql-pass>` with your server name, database name, user ID, and password.
+
+#### Azure App Service with .NET (sqlClient)
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> | | | |
+> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-db>;User ID=<sql-user>;Password=<sql-pass>` |
+
+#### Azure App Service with Java Database Connectivity (JDBC)
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> | | | |
+> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;user=<sql-user>;password=<sql-pass>;` |
+
+#### Azure App Service with Java Spring Boot (spring-boot-starter-jdbc)
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|-|-|
+> | spring.datasource.url | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;` |
+> | spring.datasource.username | Azure SQL Database datasource username | `<sql-user>` |
+> | spring.datasource.password | Azure SQL Database datasource password | `<sql-pass>` |
+
+#### Azure App Service with Go (go-mssqldb)
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> | | | |
+> | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `server=<sql-server>.database.windows.net;port=1433;database=<sql-db>;user id=<sql-user>;password=<sql-pass>;` |
+
+#### Azure App Service with Node.js
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|--|-|
+> | AZURE_SQL_SERVER | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-db>` |
+> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-user>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+
+#### Azure App Service with PHP
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|--|-|
+> | AZURE_SQL_SERVERNAME | Azure SQL Database servername | `<sql-server>.database.windows.net` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-db>` |
> | AZURE_SQL_UID | Azure SQL Database user ID (UID) | `<sql-user>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+
+#### Azure App Service with Python (pyodbc)
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|--|-|
+> | AZURE_SQL_SERVER | Azure SQL Database server | `<sql-server>.database.windows.net` |
+> | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-db>` |
+> | AZURE_SQL_USER | Azure SQL Database user | `<sql-user>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+
+#### Azure App Service with Django (mssql-django)
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|--|-|
+> | AZURE_SQL_HOST | Azure SQL Database host | `<sql-server>.database.windows.net` |
+> | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
+> | AZURE_SQL_NAME | Azure SQL Database name | `<sql-db>` |
+> | AZURE_SQL_USER | Azure SQL Database user | `<sql-user>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+
+#### Azure App Service with Ruby
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|--|-|
+> | AZURE_SQL_HOST | Azure SQL Database host | `<sql-server>.database.windows.net` |
+> | AZURE_SQL_PORT | Azure SQL Database port | `1433` |
+> | AZURE_SQL_DATABASE | Azure SQL Database database | `<sql-db>` |
+> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-user>` |
+> | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-pass>` |
+
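+As a quick check after the connection is created, you can list the web app's app settings; the `AZURE_SQL_*` variables shown in the tables above should appear among them. A sketch, assuming placeholder app and resource group names:
+
+```azurecli
+az webapp config appsettings list --name MyApp --resource-group MyResourceGroup
+```
+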
+### Connect an Azure Spring Cloud instance
+
+Use the connection details below to connect Azure Spring Cloud instances with Java Spring Boot.
+
+#### Azure Spring Cloud with Java Spring Boot (spring-boot-starter-jdbc)
+
+> [!div class="mx-tdBreakAll"]
+> | Default environment variable name | Description | Sample value |
+> |--|-|-|
+> | spring.datasource.url | Azure SQL Database datasource URL | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-db>;` |
+> | spring.datasource.username | Azure SQL Database datasource username | `<sql-user>` |
+> | spring.datasource.password | Azure SQL Database datasource password | `<sql-pass>` |
+
+## Next steps
+
+Read the concept article below to learn more about Service Connector.
+
+> [!div class="nextstepaction"]
+> [Learn about Service Connector concepts](./concept-service-connector-internals.md)
spring-cloud Tutorial Managed Identities Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-mysql.md
The following video describes how to manage secrets using Azure Key Vault.
## Create a resource group
-A resource group is a logical container where Azure resources are deployed and managed. Create a resource group to contain both the Key Vault and Spring Cloud using the command [az group create](/cli/azure/group#az_group_create):
+A resource group is a logical container where Azure resources are deployed and managed. Create a resource group to contain both the Key Vault and Spring Cloud using the command [az group create](/cli/azure/group#az-group-create):
```azurecli az group create --location <myLocation> --name <myResourceGroup>
az group create --location <myLocation> --name <myResourceGroup>
## Set up your Key Vault
-To create a Key Vault, use the command [az keyvault create](/cli/azure/keyvault#az_keyvault_create):
+To create a Key Vault, use the command [az keyvault create](/cli/azure/keyvault#az-keyvault-create):
> [!Important] > Each Key Vault must have a unique name. Replace *\<myKeyVaultName>* with the name of your Key Vault in the following examples.
az keyvault create --name <myKeyVaultName> -g <myResourceGroup>
Make a note of the returned `vaultUri`, which will be in the format `https://<your-keyvault-name>.vault.azure.net`. It will be used in the following step.
-You can now place a secret in your Key Vault with the command [az keyvault secret set](/cli/azure/keyvault/secret#az_keyvault_secret_set):
+You can now place a secret in your Key Vault with the command [az keyvault secret set](/cli/azure/keyvault/secret#az-keyvault-secret-set):
```azurecli az keyvault secret set \
storage File Sync How To Manage Tiered Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-how-to-manage-tiered-files.md
description: Tips and PowerShell commandlets to help you manage tiered files
Previously updated : 04/13/2021 Last updated : 06/06/2022
There are several ways to check whether a file has been tiered to your Azure fil
> [!WARNING] > The `fsutil reparsepoint` utility command also has the ability to delete a reparse point. Do not execute this command unless the Azure File Sync engineering team asks you to. Running this command might result in data loss.
+## How to exclude files or folders from being tiered
+
+If you want to exclude files or folders from being tiered so they remain local on the Windows Server, you can configure the **GhostingExclusionList** registry setting under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync. You can exclude files by file name, file extension, or path.
+
+To exclude files or folders from cloud tiering, perform the following steps:
+1. Open an elevated command prompt.
+2. Run one of the following commands to configure exclusions:
+
+ To exclude certain file extensions from tiering (for example, .one, .lnk, .log), run the following command:
+ **reg ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v GhostingExclusionList /t REG_SZ /d ".one|.lnk|.log" /f**
+
+ To exclude a specific file name from tiering (for example, FileName.vhd), run the following command:
+ **reg ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v GhostingExclusionList /t REG_SZ /d FileName.vhd /f**
+
+ To exclude all files under a folder from tiering (for example, D:\ShareRoot\Folder\SubFolder), run the following command:
+ **reg ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v GhostingExclusionList /t REG_SZ /d D:\\\\ShareRoot\\\\Folder\\\\SubFolder /f**
+
+ To exclude a combination of file names, file extensions and folders from tiering (for example, D:\ShareRoot\Folder1\SubFolder1,FileName.log,.txt), run the following command:
+ **reg ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v GhostingExclusionList /t REG_SZ /d "D:\\\\ShareRoot\\\\Folder1\\\\SubFolder1|FileName.log|.txt" /f**
+
+3. For the cloud tiering exclusions to take effect, you must restart the Storage Sync Agent service (FileSyncSvc) by running the following commands:
+ **net stop filesyncsvc**
+ **net start filesyncsvc**
+
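+Before restarting the service, you can optionally confirm the exclusions were written by querying the registry value (shown here for the non-cluster registry path):
+ **reg QUERY "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v GhostingExclusionList**
+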
+### More information
+- If the Azure File Sync agent is installed on a Failover Cluster, the **GhostingExclusionList** registry setting must be created under HKEY_LOCAL_MACHINE\Cluster\StorageSync\SOFTWARE\Microsoft\Azure\StorageSync.
+ - Example: **reg ADD "HKEY_LOCAL_MACHINE\Cluster\StorageSync\SOFTWARE\Microsoft\Azure\StorageSync" /v GhostingExclusionList /t REG_SZ /d ".one|.lnk|.log" /f**
+- Each exclusion in the registry should be separated by a pipe (|) character.
+- Use double backslash (\\\\) when specifying a path to exclude.
+ - Example: **reg ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v GhostingExclusionList /t REG_SZ /d D:\\\\ShareRoot\\\\Folder\\\\SubFolder /f**
+- File name or file type exclusions apply to all server endpoints on the server.
+- You cannot exclude file types from a particular folder only.
+- Exclusions do not apply to files already tiered. Use the [Invoke-StorageSyncFileRecall](#how-to-recall-a-tiered-file-to-disk) cmdlet to recall files already tiered.
+- Use Event ID 9001 in the Telemetry event log on the server to check the cloud tiering exclusions that are configured. The Telemetry event log is located in Event Viewer under Applications and Services Logs\Microsoft\FileSync\Agent.
+ ## How to exclude applications from cloud tiering last access time tracking When an application accesses a file, the last access time for the file is updated in the cloud tiering database. Applications that scan the file system like anti-virus cause all files to have the same last access time, which impacts when files are tiered.
-To exclude applications from last access time tracking, add the process exclusions to the HeatTrackingProcessNamesExclusionList registry setting.
+To exclude applications from last access time tracking, add the process exclusions to the **HeatTrackingProcessNamesExclusionList** registry setting under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync.
+
+Example: **reg ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v HeatTrackingProcessNamesExclusionList /t REG_SZ /d "SampleApp.exe|AnotherApp.exe" /f**
+
+If the Azure File Sync agent is installed on a Failover Cluster, the **HeatTrackingProcessNamesExclusionList** registry setting must be created under HKEY_LOCAL_MACHINE\Cluster\StorageSync\SOFTWARE\Microsoft\Azure\StorageSync.
-Example: reg ADD "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync" /v HeatTrackingProcessNamesExclusionList /t REG_SZ /d "SampleApp.exe|AnotherApp.exe" /f
+Example: **reg ADD "HKEY_LOCAL_MACHINE\Cluster\StorageSync\SOFTWARE\Microsoft\Azure\StorageSync" /v HeatTrackingProcessNamesExclusionList /t REG_SZ /d "SampleApp.exe|AnotherApp.exe" /f**
> [!NOTE] > Data Deduplication and File Server Resource Manager (FSRM) processes are excluded by default. Changes to the process exclusion list are honored by the system every 5 minutes.
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/2-etl-load-migration-considerations.md
To summarize, our recommendations for migrating data and associated ETL processe
- Leverage standard "built-in" Azure features to minimize the migration workload. -- Identify and understand the most efficient tools for data extraction and loading in both Netezza and Azure environments. Use the appropriate tools in each phase in the process.
+- Identify and understand the most efficient tools for data extraction and loading in both Netezza and Azure environments. Use the appropriate tools in each phase of the process.
- Use Azure facilities, such as [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779), to orchestrate and automate the migration process while minimizing impact on the Netezza system.
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md
Netezza recommends collecting statistics as follows:
- Collect statistics on unpopulated tables to set up the interval histogram used in internal processing. This initial collection makes subsequent statistics collections faster. Make sure to recollect statistics after data is added. -- Prototype phase, newly populated tables.
+- Collect prototype phase statistics for newly populated tables.
-- Production phase, after a significant percentage of change to the table or partition (~10% of rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
+- Collect production phase statistics after a significant percentage of change to the table or partition (~10% of rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
-- Recommendation: collect production phase statistics after you've created users and applied real world query loads to the database (up to about three months of querying).
+- Collect production phase statistics after you've created users and applied real world query loads to the database (up to about three months of querying).
- Collect statistics in the first few weeks after an upgrade or migration during periods of low CPU utilization.
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/2-etl-load-migration-considerations.md
With this approach, standard Teradata utilities, such as Teradata Parallel Data
- The migration process is orchestrated and controlled entirely within the Azure environment.
-#### Migrate data marts: stay physical or go virtual?
+#### When migrating data marts: stay physical or go virtual?
> [!TIP] > Virtualizing data marts can save on storage and processing resources.
To summarize, our recommendations for migrating data and associated ETL processe
- Leverage standard "built-in" Azure features to minimize the migration workload. -- Identify and understand the most efficient tools for data extraction and loading in both Teradata and Azure environments. Use the appropriate tools in each phase in the process.
+- Identify and understand the most efficient tools for data extraction and loading in both Teradata and Azure environments. Use the appropriate tools in each phase of the process.
- Use Azure facilities, such as [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779), to orchestrate and automate the migration process while minimizing impact on the Teradata system.
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/3-security-access-operations.md
Teradata recommends collecting statistics as follows:
- Collect statistics on unpopulated tables to set up the interval histogram used in internal processing. This initial collection makes subsequent statistics collections faster. Make sure to recollect statistics after data is added. -- Prototype phase, newly populated tables.
+- Collect prototype phase statistics for newly populated tables.
-- Production phase, after a significant percentage of change to the table or partition (~10% of rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
+- Collect production phase statistics after a significant percentage of change to the table or partition (~10% of rows). For high volumes of nonunique values, such as dates or timestamps, it may be advantageous to recollect at 7%.
-- Recommendation: collect production phase statistics after you've created users and applied real world query loads to the database (up to about three months of querying).
+- Collect production phase statistics after you've created users and applied real world query loads to the database (up to about three months of querying).
- Collect statistics in the first few weeks after an upgrade or migration during periods of low CPU utilization.
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/7-beyond-data-warehouse-migration.md
Another new capability in Data Factory is wrangling data flows. This lets busine
:::image type="content" source="../media/6-microsoft-3rd-party-migration-tools/azure-data-factory-wrangling-dataflows.png" border="true" alt-text="Screenshot showing an example of Azure Data Factory wrangling dataflows.":::
-This differs from Excel and Power BI, as Data Factory [wrangling data flows](/azure/data-factory/wrangling-tutorial) use Power Query Online to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark pool notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
+This differs from Excel and Power BI, as Data Factory [wrangling data flows](/azure/data-factory/wrangling-tutorial) use Power Query to generate M code and translate it into a massively parallel in-memory Spark job for cloud-scale execution. The combination of mapping data flows and wrangling data flows in Data Factory lets IT professional ETL developers and business users collaborate to prepare, integrate, and analyze data for a common business purpose. The preceding Data Factory mapping data flow diagram shows how both Data Factory and Azure Synapse Spark pool notebooks can be combined in the same Data Factory pipeline. This allows IT and business to be aware of what each has created. Mapping data flows and wrangling data flows can then be available for reuse to maximize productivity and consistency and minimize reinvention.
#### Link data and analytics in analytical pipelines
synapse-analytics Tutorial Data Analyst https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-data-analyst.md
Title: 'Tutorial: Use serverless SQL pool to analyze Azure Open Datasets in Synapse Studio'
-description: This tutorial shows you how to easily perform exploratory data analysis combining different Azure Open Datasets using serverless SQL pool and visualize the results in Synapse Studio.
+ Title: 'Tutorial: Analyze Azure Open Datasets in Synapse Studio'
+description: This tutorial shows you how to perform data analysis combining different Azure Open Datasets using serverless SQL pool and visualize results in Synapse Studio.
Previously updated : 11/20/2020 Last updated : 05/25/2022+ # Tutorial: Explore and Analyze data lakes with serverless SQL pool
-In this tutorial, you learn how to perform exploratory data analysis. You'll combine different Azure Open Datasets using serverless SQL pool. You'll then visualize the results in Synapse Studio for Azure Synapse Analytics.
+In this tutorial, you learn how to perform exploratory data analysis. You combine different Azure Open Datasets using serverless SQL pool. You then visualize the results in Synapse Studio for Azure Synapse Analytics.
-The OPENROWSET(BULK...) function allows you to access files in Azure Storage. [OPENROWSET](develop-openrowset.md) function reads content of a remote data source (for example file) and returns the content as a set of rows.
+The `OPENROWSET(BULK...)` function allows you to access files in Azure Storage. The [OPENROWSET](develop-openrowset.md) function reads the content of a remote data source, such as a file, and returns the content as a set of rows.
## Automatic schema inference
-Since data is stored in the Parquet file format, automatic schema inference is available. You can easily query the data without listing the data types of all columns in the files. You also can use the virtual column mechanism and the filepath function to filter out a certain subset of files.
+Since data is stored in the Parquet file format, automatic schema inference is available. You can query the data without listing the data types of all columns in the files. You also can use the virtual column mechanism and the `filepath` function to filter out a certain subset of files.
> [!NOTE]
-> If you are using database with non-default collation (this is default collation SQL_Latin1_General_CP1_CI_AS), you should take into account case sensitivity.
->
-> If you create a database with case sensitive collation then when you specify columns make sure to use correct name of the column.
->
-> Example for a column name 'tpepPickupDateTime' would be correct while 'tpeppickupdatetime' wouldn't work in non-default collation.
+> The default collation is `SQL_Latin1_General_CP1_CI_AS`. For a non-default collation, take case sensitivity into account.
+>
+> If you create a database with a case-sensitive collation, make sure to use the correct column names when you specify columns.
+>
+> For example, the column name `tpepPickupDateTime` would be correct, while `tpeppickupdatetime` wouldn't work in a non-default collation.
-Let's first get familiar with the NYC Taxi data by running the following query:
+This tutorial uses the [New York City (NYC) Taxi dataset](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/), which includes:
+
+- Pick-up and drop-off dates and times
+- Pick-up and drop-off locations
+- Trip distances
+- Itemized fares
+- Rate types
+- Payment types
+- Driver-reported passenger counts
+
+To get familiar with the NYC Taxi data, run the following query:
```sql SELECT TOP 100 * FROM
SELECT TOP 100 * FROM
) AS [nyc] ```
-[New York City (NYC) Taxi dataset](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/) includes:
--- Pick-up and drop-off dates and times.-- Pick-up and drop-off locations. -- Trip distances.-- Itemized fares.-- Rate types.-- Payment types. -- Driver-reported passenger counts.- Similarly, you can query the Public Holidays dataset by using the following query: ```sql
SELECT TOP 100 * FROM
) AS [holidays] ```
-Lastly, you can also query the Weather Data dataset by using the following query:
+You can also query the Weather Data dataset by using the following query:
```sql SELECT
FROM
) AS [weather] ```
-You can learn more about the meaning of the individual columns in the descriptions of the data sets:
+You can learn more about the meaning of the individual columns in the descriptions of the data sets:
+ - [NYC Taxi](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/) - [Public Holidays](https://azure.microsoft.com/services/open-datasets/catalog/public-holidays/) - [Weather Data](https://azure.microsoft.com/services/open-datasets/catalog/noaa-integrated-surface-data/) ## Time series, seasonality, and outlier analysis
-You can easily summarize the yearly number of taxi rides by using the following query:
+You can summarize the yearly number of taxi rides by using the following query:
```sql SELECT
ORDER BY 1 ASC
The following snippet shows the result for the yearly number of taxi rides:
-![Yearly number of taxi rides result snippet](./media/tutorial-data-analyst/yearly-taxi-rides.png)
+![Screenshot shows a table of yearly number of taxi rides.](./media/tutorial-data-analyst/yearly-taxi-rides.png)
The data can be visualized in Synapse Studio by switching from the **Table** to the **Chart** view. You can choose among different chart types, such as **Area**, **Bar**, **Column**, **Line**, **Pie**, and **Scatter**. In this case, plot the **Column** chart with the **Category** column set to **current_year**:
-![Column chart showing rides per year](./media/tutorial-data-analyst/column-chart-rides-year.png)
+![Screenshot shows a column chart that displays rides per year.](./media/tutorial-data-analyst/column-chart-rides-year.png)
-From this visualization, you can see a trend of decreasing ride numbers over the years. Presumably, this decrease is due to the recent increased popularity of ride-sharing companies.
+From this visualization, you can see a trend of decreasing ride numbers over the years. Presumably, this decrease is due to the recent increased popularity of ride-sharing companies.
> [!NOTE] > At the time of writing this tutorial, data for 2019 is incomplete. As a result, there's a huge drop in the number of rides for that year.
-Next, let's focus the analysis on a single year, for example, 2016. The following query returns the daily number of rides during that year:
+You can focus the analysis on a single year, for example, 2016. The following query returns the daily number of rides during that year:
```sql SELECT
ORDER BY 1 ASC
The following snippet shows the result for this query:
-![Daily number of rides for 2016 result snippet](./media/tutorial-data-analyst/daily-rides.png)
+![Screenshot shows a table of the daily number of rides for 2016 result.](./media/tutorial-data-analyst/daily-rides.png)
-Again, you can easily visualize data by plotting the **Column** chart with the **Category** column set to **current_day** and the **Legend (series)** column set to **rides_per_day**.
+Again, you can visualize data by plotting the **Column** chart with the **Category** column set to **current_day** and the **Legend (series)** column set to **rides_per_day**.
-![Column chart showing daily number of rides for 2016](./media/tutorial-data-analyst/column-chart-daily-rides.png)
+![Screenshot shows a column chart that displays the daily number of rides for 2016.](./media/tutorial-data-analyst/column-chart-daily-rides.png)
-From the plot chart, you can see there's a weekly pattern, with Saturdays as the peak day. During summer months, there are fewer taxi rides because of vacations. Also, notice some significant drops in the number of taxi rides without a clear pattern of when and why they occur.
+From the plot chart, you can see there's a weekly pattern, with Saturdays as the peak day. During summer months, there are fewer taxi rides because of vacations. Also, notice some significant drops in the number of taxi rides without a clear pattern of when and why they occur.
-Next, let's see if the drop in rides correlates with public holidays. We can see if there is a correlation by joining the NYC Taxi rides dataset with the Public Holidays dataset:
+Next, see if the drop in rides correlates with public holidays. Check if there's a correlation by joining the NYC Taxi rides dataset with the Public Holidays dataset:
```sql WITH taxi_rides AS (
FROM joined_data
ORDER BY current_day ASC ```
-![NYC Taxi rides and Public Holidays datasets result visualization](./media/tutorial-data-analyst/rides-public-holidays.png)
+![Screenshot shows a table of N Y C Taxi rides and Public Holidays datasets result.](./media/tutorial-data-analyst/rides-public-holidays.png)
-This time, we want to highlight the number of taxi rides during public holidays. For that purpose, we choose **current_day** for the **Category** column and **rides_per_day** and **holiday_rides** as the **Legend (series)** columns.
+Highlight the number of taxi rides during public holidays. For that purpose, choose **current_day** for the **Category** column and **rides_per_day** and **holiday_rides** as the **Legend (series)** columns.
-![Number of taxi rides during public holidays plot chart](./media/tutorial-data-analyst/plot-chart-public-holidays.png)
+![Screenshot shows the number of taxi rides during public holidays as a plot chart.](./media/tutorial-data-analyst/plot-chart-public-holidays.png)
From the plot chart, you can see that during public holidays the number of taxi rides is lower. There's still one unexplained large drop on January 23. Let's check the weather in NYC on that day by querying the Weather Data dataset:
FROM
WHERE countryorregion = 'US' AND CAST([datetime] AS DATE) = '2016-01-23' AND stationname = 'JOHN F KENNEDY INTERNATIONAL AIRPORT' ```
-![Weather Data dataset result visualization](./media/tutorial-data-analyst/weather-data-set-visualization.png)
+![Screenshot shows a Weather Data dataset result visualization.](./media/tutorial-data-analyst/weather-data-set-visualization.png)
The results of the query indicate that the drop in the number of taxi rides occurred because:
The results of the query indicate that the drop in the number of taxi rides occu
- It was cold (temperature was below zero degrees Celsius). - It was windy (~10 m/s).
-This tutorial has shown how a data analyst can quickly perform exploratory data analysis, easily combine different datasets by using serverless SQL pool, and visualize the results by using Azure Synapse Studio.
+This tutorial has shown how a data analyst can quickly perform exploratory data analysis. You can combine different datasets by using serverless SQL pool and visualize the results by using Azure Synapse Studio.
## Next steps To learn how to connect serverless SQL pool to Power BI Desktop and create reports, see [Connect serverless SQL pool to Power BI Desktop and create reports](tutorial-connect-power-bi-desktop.md). To learn how to use External tables in serverless SQL pool see [Use external tables with Synapse SQL](develop-tables-external-tables.md?tabs=sql-pool)
-
virtual-machines Co Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/co-location.md
Proximity placement groups offer colocation in the same data center. However, be
- When you ask for the first virtual machine in the proximity placement group, the data center is automatically selected. In some cases, a second request for a different virtual machine SKU, may fail if it doesn't exist in that data center. In this case, an **OverconstrainedAllocationRequest** error is returned. To avoid this, try changing the order in which you deploy your SKUs or have both resources deployed using a single ARM template. - In the case of elastic workloads, where you add and remove VM instances, having a proximity placement group constraint on your deployment may result in a failure to satisfy the request resulting in **AllocationFailure** error. -- Stopping (deallocate) and starting your VMs as needed is another way to achieve elasticity. Since the capacity is not kept once you stop (deallocate) a VM, starting it again may result in an **AllocationFailure** error.
+- Stopping (deallocate) and starting your VMs as needed is another way to achieve elasticity. Since the capacity is not kept once you stop (deallocate) a VM, starting it again may result in an **AllocationFailure** error.
+- VM start and redeploy operations will continue to respect the Proximity Placement Group once sucessfully configured.
## Planned maintenance and Proximity Placement Groups
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
Ephemeral disks also require that the VM size supports **Premium storage**. The
- Azure Site Recovery - OS Disk Swap
- ## Trusted Launch for Ephemeral OS disks (Preview)
+ ## Trusted Launch for Ephemeral OS disks
Ephemeral OS disks can be created with Trusted launch. Not all VM sizes and regions are supported for trusted launch. Please check [limitations of trusted launch](trusted-launch.md#limitations) for supported sizes and regions. VM guest state (VMGS) is specific to trusted launch VMs. It is a blob that is managed by Azure and contains the unified extensible firmware interface (UEFI) secure boot signature databases and other security information. When you use trusted launch, by default **1 GiB** from the **OS cache** or **temp storage**, based on the chosen placement option, is reserved for VMGS. The lifecycle of the VMGS blob is tied to that of the OS disk.
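As a hedged sketch (names are placeholders; size and region support varies), creating a trusted launch VM with an ephemeral OS disk via the Azure CLI might look like:

```azurecli-interactive
# Hedged sketch; names are placeholders. Trusted launch requires a Gen2 image,
# and with CacheDisk placement the cache of the chosen size must fit the image.
az vm create \
  --resource-group myResourceGroup \
  --name myTrustedLaunchVM \
  --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest \
  --size Standard_D4s_v3 \
  --security-type TrustedLaunch \
  --enable-secure-boot true \
  --enable-vtpm true \
  --ephemeral-os-disk true \
  --ephemeral-os-disk-placement CacheDisk \
  --os-disk-caching ReadOnly \
  --admin-username azureuser \
  --generate-ssh-keys
```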
virtual-machines Key Vault Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md
The Key Vault VM extension logs only exist locally on the VM and are most inform
|Location|Description|
|--|--|
| C:\WindowsAzure\Logs\WaAppAgent.log | Shows when an update to the extension occurred. |
-| C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\<most recent version\>\ | Shows the status of certificate download. The download location will always be the Windows computer's MY store (certlm.msc). |
-| C:\Packages\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\<most recent version\>\RuntimeSettings\ | The Key Vault VM Extension service logs show the status of the akvvm_service service. |
-| C:\Packages\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\<most recent version\>\Status\ | The configuration and binaries for Key Vault VM Extension service. |
+| C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\\\<most recent version\>\ | Shows the status of certificate download. The download location will always be the Windows computer's MY store (certlm.msc). |
+| C:\Packages\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\\\<most recent version\>\RuntimeSettings\ | The Key Vault VM Extension service logs show the status of the akvvm_service service. |
+| C:\Packages\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\\\<most recent version\>\Status\ | The configuration and binaries for Key Vault VM Extension service. |
|||
virtual-machines Update Linux Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/update-linux-agent.md
Open [the release of Azure Linux Agent in GitHub](https://github.com/Azure/WALin
For version 2.2.x or later, type:

```bash
-wget https://github.com/Azure/WALinuxAgent/archive/v2.2.x.zip
+wget https://github.com/Azure/WALinuxAgent/archive/refs/tags/v2.2.x.zip
unzip v2.2.x.zip
cd WALinuxAgent-2.2.x
```
-The following line uses version 2.2.0 as an example:
+The following line uses version 2.2.14 as an example:
```bash
-wget https://github.com/Azure/WALinuxAgent/archive/v2.2.14.zip
+wget https://github.com/Azure/WALinuxAgent/archive/refs/tags/v2.2.14.zip
unzip v2.2.14.zip
cd WALinuxAgent-2.2.14
```
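After extracting, the install step that typically follows is shown below as a hedged sketch (the exact Python command depends on your distribution):

```bash
# Run from inside the extracted WALinuxAgent-2.2.14 directory; installs the
# agent and registers waagent as a system service.
sudo python setup.py install --register-service
```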
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
sudo reboot
2. Install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected (and documented above), installing LIS is not required.
- Please note that LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Please refer to the [Linux Integration Services documentation] (https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details.
+ Please note that LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Please refer to the [Linux Integration Services documentation](https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details.
Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions.

```bash
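# Hedged sketch of the LIS download and install steps from the linked page;
# skip this on CentOS/RHEL 7.8 or later as noted above.
wget https://aka.ms/lis
tar xvzf lis
cd LISISO
sudo ./install.sh
sudo reboot
```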
virtual-machines Ssh From Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/ssh-from-windows.md
You can also create key pairs with the [Azure CLI](/cli/azure) with the [az sshk
To create an SSH key pair on your local computer using the `ssh-keygen` command from PowerShell or a command prompt, type the following:

```powershell
-ssh-keygen -m PEM -t rsa -b 4096
+ssh-keygen -m PEM -t rsa -b 2048
```

Enter a filename, or use the default shown in parentheses (for example `C:\Users\username/.ssh/id_rsa`). Enter a passphrase for the file, or leave the passphrase blank if you do not want to use a passphrase.
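As mentioned above, the Azure CLI can generate and store a key pair for you as well; a minimal hedged sketch (resource names hypothetical):

```azurecli-interactive
# Creates an SSH public key resource in Azure and writes the generated
# private key to the local machine.
az sshkey create --name mySSHKey --resource-group myResourceGroup
```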
virtual-machines Sizes Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-hpc.md
Azure H-series virtual machines (VMs) are designed to deliver leadership-class performance, scalability, and cost efficiency for various real-world HPC workloads.
-[HBv3-series](hbv3-series.md) VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, reservoir simulation, and RTL simulation. HBv3 VMs feature up to 120 AMD EPYC™ 7003-series (Milan) CPU cores, 448 GB of RAM, and no hyperthreading. HBv3-series VMs also provide 350 GB/sec of memory bandwidth, up to 32 MB of L3 cache per core, up to 7 GB/s of block device SSD performance, and clock frequencies up to 3.675 GHz.
+[HBv3-series](hbv3-series.md) VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, reservoir simulation, and RTL simulation. HBv3 VMs feature up to 120 AMD EPYC™ 7003-series (Milan) CPU cores, 448 GB of RAM, and no hyperthreading. HBv3-series VMs also provide 350 GB/sec of memory bandwidth, up to 32 MB of L3 cache per core, up to 7 GB/s of block device SSD performance, and clock frequencies up to 3.5 GHz.
All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to enable supercomputer-scale MPI workloads. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. The HDR InfiniBand fabric also supports Adaptive Routing and the Dynamic Connected Transport (DCT, in addition to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and their usage is strongly recommended.
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Trusted launch now supports Azure Backup. For more information, see [Support ma
### Does trusted launch support ephemeral OS disks?
-Trusted launch now supports ephemeral OS disks in preview. Note that, while using ephemeral disks for Trusted Launch VMs, keys and secrets generated or sealed by the vTPM after the creation of the VM may not be persisted across operations like reimaging and platform events like service healing. For more information, see [Trusted Launch for Ephemeral OS disks (Preview)](https://aka.ms/ephemeral-os-disks-support-trusted-launch).
+Trusted launch supports ephemeral OS disks. Note that, while using ephemeral disks for Trusted Launch VMs, keys and secrets generated or sealed by the vTPM after the creation of the VM may not be persisted across operations like reimaging and platform events like service healing. For more information, see [Trusted Launch for Ephemeral OS disks](https://aka.ms/ephemeral-os-disks-support-trusted-launch).
### How can I find VM sizes that support Trusted launch?
virtual-machines Update Image Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/update-image-resources.md
az sig image-definition list --resource-group myGalleryRG --gallery-name myGalle
**List image versions**
-List image versions in your gallery using [az sig image-version list](/cli/azure/sig/image-version#az_sig_image_version_list):
+List image versions in your gallery using [az sig image-version list](/cli/azure/sig/image-version#az-sig-image-version-list):
```azurecli-interactive
az sig image-version list --resource-group myGalleryRG --gallery-name myGallery
**Get a specific image version**
-Get the ID of a specific image version in your gallery using [az sig image-version show](/cli/azure/sig/image-version#az_sig_image_version_show).
+Get the ID of a specific image version in your gallery using [az sig image-version show](/cli/azure/sig/image-version#az-sig-image-version-show).
```azurecli-interactive
az sig image-version show \
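    --resource-group myGalleryRG \
    --gallery-name myGallery \
    --gallery-image-definition myImageDefinition \
    --gallery-image-version 1.0.0 \
    --query id --output tsv
# Hedged completion: the digest truncates this snippet, so the arguments above
# reuse the example names from this article plus a hypothetical version number.
```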
az sig list --query [*]."{Name:name,PublicName:sharingProfile.communityGalleryIn
> As an end user, to get the public name of a community gallery, you currently need to use the portal. Go to **Virtual machines** > **Create** > **Azure virtual machine** > **Image** > **See all images** > **Community Images** > **Public gallery name**.
-List all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az_sig_image_definition_list_community).
+List all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az-sig-image-definition-list-community).
In this example, we list all of the images in the *ContosoImage* gallery in *West US*, showing the name, the unique ID that is needed to create a VM, the OS, and the OS state.
In this example, we list all of the images in the *ContosoImage* gallery in *Wes
--query [*]."{Name:name,ID:uniqueId,OS:osType,State:osState}" -o table ```
-List image versions shared in a community gallery using [az sig image-version list-community](/cli/azure/sig/image-version#az_sig_image_version_list_community):
+List image versions shared in a community gallery using [az sig image-version list-community](/cli/azure/sig/image-version#az-sig-image-version-list-community):
```azurecli-interactive
az sig image-version list-community \
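    --location westus \
    --public-gallery-name "ContosoImages-1a2b3c4d-1234-abcd-1234-1a2b3c4d5e6f" \
    --gallery-image-definition myImageDefinition \
    --output table
# Hedged completion: the digest truncates this snippet; the public gallery
# name shown is a hypothetical example of the generated unique name.
```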
virtual-machines Vm Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-specialized-image-version.md
To create a VM using an image shared to a community gallery, use the unique ID o
As an end user, to get the public name of a community gallery, you need to use the portal. Go to **Virtual machines** > **Create** > **Azure virtual machine** > **Image** > **See all images** > **Community Images** > **Public gallery name**.
-List all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az_sig_image_definition_list_community). In this example, we list all of the images in the *ContosoImage* gallery in *West US* and by name, the unique ID that is needed to create a VM, OS and OS state.
+List all of the image definitions that are available in a community gallery using [az sig image-definition list-community](/cli/azure/sig/image-definition#az-sig-image-definition-list-community). In this example, we list all of the images in the *ContosoImage* gallery in *West US*, showing the name, the unique ID that is needed to create a VM, the OS, and the OS state.
```azurecli-interactive az sig image-definition list-community \
virtual-machines Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations.md
vm-linux Previously updated : 02/11/2022 Last updated : 06/06/2022 # SAP HANA infrastructure configurations and operations on Azure
-This document provides guidance for configuring Azure infrastructure and operating SAP HANA systems that are deployed on Azure native virtual machines (VMs). The document also includes configuration information for SAP HANA scale-out for the M128s VM SKU. This document is not intended to replace the standard SAP documentation, which includes the following content:
+This document provides guidance for configuring Azure infrastructure and operating SAP HANA systems that are deployed on Azure native virtual machines (VMs). The document also includes configuration information for SAP HANA scale-out for the M128s VM SKU. This document isn't intended to replace the standard SAP documentation, which includes the following content:
- [SAP administration guide](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/330e5550b09d4f0f8b6cceb14a64cd22.html) - [SAP installation guides](https://service.sap.com/instguides)
Deploy the VMs in Azure by using:
You also can deploy a completely installed SAP HANA platform on the Azure VM services through the [SAP Cloud platform](https://cal.sap.com/). The installation process is described in [Deploy SAP S/4HANA or BW/4HANA on Azure](./cal-s4h.md) or with the automation released on [GitHub](https://github.com/AzureCAT-GSI/SAP-HANA-ARM).

>[!IMPORTANT]
-> In order to use M208xx_v2 VMs, you need to be careful selecting your Linux image. For more details, see [Memory optimized virtual machine sizes](../../mv2-series.md).
+> In order to use M208xx_v2 VMs, you need to be careful selecting your Linux image. For more information, see [Memory optimized virtual machine sizes](../../mv2-series.md).
>
For storage configurations and storage types to be used with SAP HANA in Azure,
### Set up Azure virtual networks
-When you have site-to-site connectivity into Azure via VPN or ExpressRoute, you must have at least one Azure virtual network that is connected through a Virtual Gateway to the VPN or ExpressRoute circuit. In simple deployments, the Virtual Gateway can be deployed in a subnet of the Azure virtual network (VNet) that hosts the SAP HANA instances as well. To install SAP HANA, you create two additional subnets within the Azure virtual network. One subnet hosts the VMs to run the SAP HANA instances. The other subnet runs Jumpbox or Management VMs to host SAP HANA Studio, other management software, or your application software.
+When you have site-to-site connectivity into Azure via VPN or ExpressRoute, you must have at least one Azure virtual network that is connected through a Virtual Gateway to the VPN or ExpressRoute circuit. In simple deployments, the Virtual Gateway can be deployed in a subnet of the Azure virtual network (VNet) that hosts the SAP HANA instances as well. To install SAP HANA, you create two more subnets within the Azure virtual network. One subnet hosts the VMs to run the SAP HANA instances. The other subnet runs Jumpbox or Management VMs to host SAP HANA Studio, other management software, or your application software.
> [!IMPORTANT]
> For functionality, but more importantly for performance reasons, it is not supported to configure [Azure Network Virtual Appliances](https://azure.microsoft.com/solutions/network-appliances/) in the communication path between the SAP application and the DBMS layer of a SAP NetWeaver, Hybris or S/4HANA based SAP system. The communication between the SAP application layer and the DBMS layer needs to be a direct one. The restriction does not include [Azure ASG and NSG rules](../../../virtual-network/network-security-groups-overview.md) as long as those ASG and NSG rules allow a direct communication. Further scenarios where NVAs are not supported are in communication paths between Azure VMs that represent Linux Pacemaker cluster nodes and SBD devices as described in [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications](./high-availability-guide-suse.md), or in communication paths between Azure VMs and Windows Server SOFS set up as described in [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a file share in Azure](./sap-high-availability-guide-wsfc-file-share.md). NVAs in communication paths can easily double the network latency between two communication partners and can restrict throughput in critical paths between the SAP application layer and the DBMS layer. In some scenarios observed with customers, NVAs can cause Pacemaker Linux clusters to fail when the Linux Pacemaker cluster nodes need to communicate with their SBD device through an NVA.
When you install the VMs to run SAP HANA, the VMs need:
> >
-However, for deployments that are enduring, you need to create a virtual datacenter network architecture in Azure. This architecture recommends the separation of the Azure VNet Gateway that connects to on-premises into a separate Azure VNet. This separate VNet should host all the traffic that leaves either to on-premises or to the internet. This approach allows you to deploy software for auditing and logging traffic that enters the virtual datacenter in Azure in this separate hub VNet. So you have one VNet that hosts all the software and configurations that relates to in- and outgoing traffic to your Azure deployment.
+However, for deployments that are enduring, you need to create a virtual datacenter network architecture in Azure. This architecture recommends the separation of the Azure VNet Gateway that connects to on-premises into a separate Azure VNet. This separate VNet should host all the traffic that leaves either to on-premises or to the internet. This approach allows you to deploy software for auditing and logging traffic that enters the virtual datacenter in Azure in this separate hub VNet. So you have one VNet that hosts all the software and configurations that relate to in- and outgoing traffic to your Azure deployment.
The articles [Azure Virtual Datacenter: A Network Perspective](/azure/architecture/vdc/networking-virtual-datacenter) and [Azure Virtual Datacenter and the Enterprise Control Plane](/azure/architecture/vdc/) give more information on the virtual datacenter approach and related Azure VNet design.
For VMs running SAP HANA, you should work with static IP addresses assigned. Rea
[Azure Network Security Groups (NSGs)](../../../virtual-network/virtual-network-vnet-plan-design-arm.md) are used to direct traffic that's routed to the SAP HANA instance or the jumpbox. The NSGs and eventually [Application Security Groups](../../../virtual-network/network-security-groups-overview.md#application-security-groups) are associated to the SAP HANA subnet and the Management subnet.
-The following image shows an overview of a rough deployment schema for SAP HANA following a hub and spoke VNet architecture:
-
-![Rough deployment schema for SAP HANA](media/hana-vm-operations/hana-simple-networking-dmz.png)
- To deploy SAP HANA in Azure without a site-to-site connection, you still want to shield the SAP HANA instance from the public internet and hide it behind a forward proxy. In this basic scenario, the deployment relies on Azure built-in DNS services to resolve hostnames. In a more complex deployment where public-facing IP addresses are used, Azure built-in DNS services are especially important. Use Azure NSGs and [Azure NVAs](https://azure.microsoft.com/solutions/network-appliances/) to control, monitor the routing from the internet into your Azure VNet architecture in Azure. The following image shows a rough schema for deploying SAP HANA without a site-to-site connection in a hub and spoke VNet architecture: ![Rough deployment schema for SAP HANA without a site-to-site connection](media/hana-vm-operations/hana-simple-networking-dmz.png)
Another description on how to use Azure NVAs to control and monitor access from
## Configuring Azure infrastructure for SAP HANA scale-out
-In order to find out the Azure VM types that are certified for either OLAP scale-out or S/4HANA scale-out, check the [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). A checkmark in the column 'Clustering' indicates scale-out support. Application type indicates whether OLAP scale-out or S/4HANA scale-out is supported. For details on nodes certified in scale-out for each of the VMs, check the details of the entries in the particular VM SKU listed in the SAP HANA hardware directory.
-The minimum OS releases for deploying scale-out configurations in Azure VMs, check the details of the entries in the particular VM SKU listed in the SAP HANA hardware directory. Of a n-node OLAP scale-out configuration, one node functions as master node. The other nodes up to the limit of the certification act as worker node. Additional standby nodes don't count into the number of certified nodes
+In order to find out the Azure VM types that are certified for either OLAP scale-out or S/4HANA scale-out, check the [SAP HANA hardware directory](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure). A checkmark in the column 'Clustering' indicates scale-out support. Application type indicates whether OLAP scale-out or S/4HANA scale-out is supported. For details on nodes certified in scale-out, review the entry for a specific VM SKU listed in the SAP HANA hardware directory.
+
+For the minimum OS releases for deploying scale-out configurations in Azure VMs, check the details of the entries in the particular VM SKU listed in the SAP HANA hardware directory. Of an n-node OLAP scale-out configuration, one node functions as the main node. The other nodes, up to the limit of the certification, act as worker nodes. More standby nodes don't count toward the number of certified nodes.
>[!NOTE]
> Azure VM scale-out deployments of SAP HANA with standby node are only possible using the [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) storage. No other SAP HANA certified Azure storage allows the configuration of SAP HANA standby nodes.
A typical basic design for a single node in a scale-out configuration is going t
The basic configuration of a VM node for SAP HANA scale-out looks like:

- For **/hana/shared**, you use the native NFS service provided through Azure NetApp Files.
-- All other disk volumes are not shared among the different nodes and are not based on NFS. Installation configurations and steps for scale-out HANA installations with non-shared **/han).
+- All other disk volumes aren't shared among the different nodes and aren't based on NFS. Installation configurations and steps for scale-out HANA installations with non-shared **/han).
When sizing the volumes or disks, check the document [SAP HANA TDI Storage Requirements](https://archive.sap.com/kmuuid2/70c8e423-c8aa-3210-3fae-e043f5c1ca92/SAP%20HANA%20TDI%20-%20Storage%20Requirements.pdf) for the required size, which depends on the number of worker nodes. The document provides a formula you need to apply to get the required capacity of the volume.

The other design criterion displayed in the graphics of the single node configuration for a scale-out SAP HANA VM is the VNet, or better the subnet configuration. SAP highly recommends a separation of the client/application facing traffic from the communications between the HANA nodes. As shown in the graphics, this goal is achieved by having two different vNICs attached to the VM. Both vNICs are in different subnets and have two different IP addresses. You then control the flow of traffic with routing rules using NSGs or user-defined routes.
-Particularly in Azure, there are no means and methods to enforce quality of service and quotas on specific vNICs. As a result, the separation of client/application facing and intra-node communication does not open any opportunities to prioritize one traffic stream over the other. Instead the separation remains a measure of security in shielding the intra-node communications of the scale-out configurations.
+Particularly in Azure, there are no means and methods to enforce quality of service and quotas on specific vNICs. As a result, the separation of client/application facing and intra-node communication doesn't open any opportunities to prioritize one traffic stream over the other. Instead the separation remains a measure of security in shielding the intra-node communications of the scale-out configurations.
>[!NOTE]
>SAP recommends separating network traffic to the client/application side and intra-node traffic as described in this document. Therefore putting an architecture in place as shown in the last graphics is recommended. Also consult your security and compliance team for requirements that deviate from the recommendation.
Installing a scale-out SAP configuration, you need to perform rough steps of:
- Deploying new or adapting an existing Azure VNet infrastructure
- Deploying the new VMs using Azure Managed Premium Storage, Ultra disk volumes, and/or NFS volumes based on ANF
-- Install the SAP HANA master node.
-- Adapt configuration parameters of the SAP HANA master node
+
+- Install the SAP HANA main node.
+- Adapt configuration parameters of the SAP HANA main node
- Continue with the installation of the SAP HANA worker nodes

#### Installation of SAP HANA in scale-out configuration

As your Azure VM infrastructure is deployed, and all other preparations are done, you need to install the SAP HANA scale-out configurations in these steps:

-- Install the SAP HANA master node according to SAP's documentation
-- In case of using Azure Premium Storage or Ultra disk storage with non-shared disks of /hana/data and /hana/log, you need to change the global.ini file and add the parameter 'basepath_shared = no' to the global.ini file. This parameter enables SAP HANA to run in scale-out without 'shared' **/hana/data** and **/hana/log** volumes between the nodes. Details are documented in [SAP Note #2080991](https://launchpad.support.sap.com/#/notes/2080991). If you are using NFS volumes based on ANF for /hana/data and /hana/log, you don't need to make this change
+- Install the SAP HANA main node according to SAP's documentation
+- When using Azure Premium Storage or Ultra disk storage with non-shared disks of `/hana/data` and `/hana/log`, add the parameter `basepath_shared = no` to the `global.ini` file, as sketched after this list. This parameter enables SAP HANA to run in scale-out without shared `/hana/data` and `/hana/log` volumes between the nodes. Details are documented in [SAP Note #2080991](https://launchpad.support.sap.com/#/notes/2080991). If you're using NFS volumes based on ANF for /hana/data and /hana/log, you don't need to make this change.
- After the eventual change in the global.ini parameter, restart the SAP HANA instance
-- Add additional worker nodes. See also <https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/0d9fe701e2214e98ad4f8721f6558c34.html>. Specify the internal network for SAP HANA inter-node communication during the installation or afterwards using, for example, the local hdblcm. For more detailed documentation, see also [SAP Note #2183363](https://launchpad.support.sap.com/#/notes/2183363).
+- Add more worker nodes. See also <https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/0d9fe701e2214e98ad4f8721f6558c34.html>. Specify the internal network for SAP HANA inter-node communication during the installation or afterwards using, for example, the local hdblcm. For more detailed documentation, see also [SAP Note #2183363](https://launchpad.support.sap.com/#/notes/2183363).
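A hedged sketch of that `global.ini` change (the path and `<SID>` are placeholders; `basepath_shared` belongs in the `[persistence]` section):

```bash
# Append the parameter to the customer-specific global.ini of the <SID>
# system; verify the path for your installation before running.
cat >> /hana/shared/<SID>/global/hdb/custom/config/global.ini <<'EOF'
[persistence]
basepath_shared = no
EOF
```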
-Details to set up an SAP HANA scale-out system with standby node on SUSE Linux is described in detail in [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md). Equivalent documentation for Red Hat can be found in the article [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red Hat Enterprise Linux](./sap-hana-scale-out-standby-netapp-files-rhel.md).
+To set up an SAP HANA scale-out system with a standby node, see the [SUSE Linux deployment instructions](./sap-hana-scale-out-standby-netapp-files-suse.md) or the [Red Hat deployment instructions](./sap-hana-scale-out-standby-netapp-files-rhel.md).
## SAP HANA Dynamic Tiering 2.0 for Azure virtual machines
-In addition to the SAP HANA certifications on Azure M-series VMs, SAP HANA Dynamic Tiering 2.0 is also supported on Microsoft Azure
-(see SAP HANA Dynamic Tiering documentation links further down). While there is no difference in installing the product or
-operating it, for example, via SAP HANA Cockpit inside an Azure Virtual Machine, there are a few important items, which are mandatory for official support on Azure. These key points are described below. Throughout the article, the abbreviation "DT 2.0" is going to be used instead of the full name Dynamic Tiering 2.0.
+In addition to the SAP HANA certifications on Azure M-series VMs, SAP HANA Dynamic Tiering 2.0 is also supported on Microsoft Azure. For more information, see [Links to DT 2.0 documentation](#links-to-dt-20-documentation). There's no difference in installing or operating the product. For example, you can install SAP HANA Cockpit inside an Azure VM. However, there are some mandatory requirements, as described in the following section, for official support on Azure. Throughout the article, the abbreviation "DT 2.0" is going to be used instead of the full name Dynamic Tiering 2.0.
SAP HANA Dynamic Tiering 2.0 isn't supported by SAP BW or S4HANA. Main use cases right now are native HANA applications.

### Overview
-The picture below gives an overview regarding DT 2.0 support on Microsoft Azure. There is a set of mandatory requirements, which
+The picture below gives an overview regarding DT 2.0 support on Microsoft Azure. There's a set of mandatory requirements, which
has to be followed to comply with the official certification:

- DT 2.0 must be installed on a dedicated Azure VM. It may not run on the same VM where SAP HANA runs
More details are going to be explained in the following sections.
### Dedicated Azure VM for SAP HANA DT 2.0
-On Azure IaaS, DT 2.0 is only supported on a dedicated VM. It is not allowed to run DT 2.0 on the same Azure VM where the HANA
+On Azure IaaS, DT 2.0 is only supported on a dedicated VM. It isn't allowed to run DT 2.0 on the same Azure VM where the HANA
instance is running. Initially two VM types can be used to run SAP HANA DT 2.0:

- M64-32ms
instance is running. Initially two VM types can be used to run SAP HANA DT 2.0:
For more information on the VM type description, see [Azure VM sizes - Memory](../../sizes-memory.md). Given the basic idea of DT 2.0, which is about offloading "warm" data in order to save costs, it makes sense to use corresponding
-VM sizes. There is no strict rule though regarding the possible combinations. It depends on the specific customer workload.
+VM sizes. There's no strict rule though regarding the possible combinations. It depends on the specific customer workload.
Recommended configurations would be:
See additional information about Azure accelerated networking [Create an Azure V
### VM Storage for SAP HANA DT 2.0
-According to DT 2.0 best practice guidance, the disk IO throughput should be minimum 50 MB/sec per physical core. Looking at the spec for the two
-Azure VM types, which are supported for DT 2.0 the maximum disk IO throughput limit for the VM look like:
+According to DT 2.0 best practice guidance, the disk IO throughput should be minimum 50 MB/sec per physical core.
-- E32sv3 : 768 MB/sec (uncached) which means a ratio of 48 MB/sec per physical core
-- M64-32ms : 1000 MB/sec (uncached) which means a ratio of 62.5 MB/sec per physical core
+According to the specifications for the two Azure VM types that are supported for DT 2.0, the maximum disk IO throughput limit for the VM looks like:
-It is required to attach multiple Azure disks to the DT 2.0 VM and create a software raid (striping) on OS level to achieve the max limit of disk throughput
-per VM. A single Azure disk cannot provide the throughput to reach the max VM limit in this regard. Azure Premium storage is mandatory to run DT 2.0.
+- E32sv3: 768 MB/sec (uncached) which means a ratio of 48 MB/sec per physical core
+- M64-32ms: 1000 MB/sec (uncached) which means a ratio of 62.5 MB/sec per physical core
+
+It's required to attach multiple Azure disks to the DT 2.0 VM and create a software raid (striping) on OS level to achieve the max limit of disk throughput
+per VM. A single Azure disk can't provide the throughput to reach the max VM limit in this regard. Azure Premium storage is mandatory to run DT 2.0.
- Details about available Azure disk types can be found on the [Select a disk type for Azure IaaS VMs - managed disks](../../disks-types.md) page - Details about creating software raid via mdadm can be found on the [Configure software RAID on a Linux VM](/previous-versions/azure/virtual-machines/linux/configure-raid) page
Especially in case the workload is read-intense it could boost IO performance to
data volumes of database software. For the transaction log, however, the Azure host disk cache must be set to "none". Regarding the size of the log volume, a recommended starting point is a heuristic of 15% of the data size. The creation of the log volume can be accomplished by using different
-Azure disk types depending on cost and throughput requirements. For the log volume, high I/O throughput is required. In case of using the VM type M64-32ms it is
-mandatory to enable [Write Accelerator](../../how-to-enable-write-accelerator.md). Azure Write Accelerator provides optimal disk write latency for the transaction
+Azure disk types depending on cost and throughput requirements. For the log volume, high I/O throughput is required.
+
+When using the VM type M64-32ms, it's mandatory to enable [Write Accelerator](../../how-to-enable-write-accelerator.md). Azure Write Accelerator provides optimal disk write latency for the transaction
log (only available for M-series). There are some items to consider though, such as the maximum number of disks per VM type. Details about Write Accelerator can be found on the [Azure Write Accelerator](../../how-to-enable-write-accelerator.md) page.
Here are a few examples about sizing the log volume:
Like for SAP HANA scale-out, the /hana/shared directory has to be shared between the SAP HANA VM and the DT 2.0 VM. The same architecture as for SAP HANA scale-out, using dedicated VMs which act as a highly available NFS server, is recommended. In order to provide a shared backup volume,
-the identical design can be used. But it is up to the customer if HA would be necessary or if it is sufficient to just use a dedicated VM with
+the identical design can be used. But it's up to the customer if HA would be necessary or if it's sufficient to just use a dedicated VM with
enough storage capacity to act as a backup server.
virtual-machines High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker.md
The STONITH device uses a Service Principal to authorize against Microsoft Azure
### **[1]** Create a custom role for the fence agent
-The Service Principal does not have permissions to access your Azure resources by default. You need to give the Service Principal permissions to start and stop (power-off) all virtual machines of the cluster. If you did not already create the custom role, you can create it using [PowerShell](../../../role-based-access-control/role-assignments-powershell.md) or [Azure CLI](../../../role-based-access-control/role-assignments-cli.md)
+The Service Principal does not have permissions to access your Azure resources by default. You need to give the Service Principal permissions to start and stop (power-off) all virtual machines of the cluster. If you did not already create the custom role, you can create it using [PowerShell](../../../role-based-access-control/custom-roles-powershell.md) or [Azure CLI](../../../role-based-access-control/custom-roles-cli.md)
Use the following content for the input file. You need to adapt the content to your subscriptions, that is, replace *xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx* and *yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy* with the IDs of your subscription. If you only have one subscription, remove the second entry in AssignableScopes.
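With the input file saved locally (here as a hypothetical `fence-agent-role.json`), a minimal sketch of registering the custom role with the Azure CLI:

```azurecli-interactive
# The JSON file contains the role definition built from the content above;
# the file name is a placeholder.
az role definition create --role-definition ./fence-agent-role.json
```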
virtual-machines Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md
Configure and prepare your OS by doing the following steps:
> [!TIP]
> Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-4. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
+4. **[A]** Adjust the sunrpc settings for NFSv3 volumes, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
<pre><code> vi /etc/modprobe.d/sunrpc.conf
virtual-wan About Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing-preference.md
+
+ Title: 'Virtual WAN virtual hub routing preference - Preview'
+
+description: Learn about Virtual WAN virtual hub routing preference.
+++ Last updated : 05/31/2022++
+# Virtual hub routing preference (Preview)
+
+A Virtual WAN virtual hub connects to virtual networks (VNets) and on-premises using connectivity gateways, such as site-to-site (S2S) VPN gateway, ExpressRoute (ER) gateway, point-to-site (P2S) gateway, and SD-WAN Network Virtual Appliance (NVA). The virtual hub router provides central route management and enables advanced routing scenarios using route propagation, route association, and custom route tables.
+
+The virtual hub router makes routing decisions using a built-in route selection algorithm. To influence routing decisions in the virtual hub router towards on-premises, we now have a new Virtual WAN hub feature called **Hub routing preference** (HRP). When a virtual hub router learns multiple routes across S2S VPN, ER, and SD-WAN NVA connections for a destination route-prefix in on-premises, the virtual hub router's route selection algorithm adapts based on the hub routing preference configuration and selects the best routes. You can now configure **Hub routing preference** using the [Azure Preview portal](https://portal.azure.com/?feature.customRouterAsn=true&feature.virtualWanRoutingPreference=true#home).
+
+> [!IMPORTANT]
+> The Virtual WAN feature **Hub routing preference** is currently in public preview. If you are interested in trying this feature, please follow the documentation below.
+This public preview is provided without a service-level agreement and shouldn't be used for production workloads. Certain features might not be supported, might have constrained capabilities, or might not be available in all Azure locations. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+
+## <a name="selection"></a>Route selection algorithm for virtual hub
+
+This section explains the route selection algorithm in a virtual hub along with the control provided by HRP. When a virtual hub has multiple routes to a destination prefix for on-premises, the best route or routes are selected in the order of preference as follows:
+
+1. Select routes with Longest Prefix Match (LPM).
+
+1. Prefer static routes over BGP routes.
+
+1. Select the best path based on the HRP configuration. There are three possible configurations for HRP, and the route preference changes accordingly.
+
+ * **ExpressRoute** (This is the default setting.)
+
+ 1. Prefer routes from connections local to a virtual hub over routes learned from remote hub.
+ 1. If there are still routes from both ER and S2S VPN connections, then see below. Else proceed to the next rule.
+ * If all the routes are local to the hub, then choose routes learned from ER connections because HRP is set to ER.
+ * If all the routes are through remote hubs, then choose routes from S2S VPN connections over ER connections, because ER-to-ER transit is supported only if the circuits have ER Global Reach enabled and an Azure Firewall or NVA is provisioned inside the virtual hub.
+ 1. Then, prefer the routes with the shortest BGP AS-Path length.
+
+ * **VPN**
+
+ 1. Prefer routes from connections local to a virtual hub over routes learned from remote hub.
+ 1. If there are routes from both ER and S2S VPN connections, then choose S2S VPN routes.
+ 1. Then, prefer the routes with the shortest BGP AS-Path length.
+
+ * **AS Path**
+
+ 1. Prefer routes with the shortest BGP AS-Path length irrespective of the source of the route advertisements. For example, whether the routes are learned from on-premises connected via S2S VPN or ER.
+ 1. Prefer routes from connections local to the virtual hub over routes learned from remote hub.
+ 1. If there are routes from both ER and S2S VPN connections, then see below. Else proceed to the next rule.
+ * If all the routes are local to the virtual hub, then choose routes from ER connections.
+ * If all the routes are through remote virtual hubs, then choose routes from S2S VPN connections.
+
+1. If there are still multiple routes, load-balance across all paths using equal-cost multi-path (ECMP) routing.
+
+**Things to note:**
+
+* When there are multiple virtual hubs in a Virtual WAN scenario, a virtual hub selects the best routes using the route selection algorithm described above, and then advertises them to the other virtual hubs in the virtual WAN.
+
+* **Limitation:** If a route-prefix is reachable via ER or VPN connections, and via virtual hub SD-WAN NVA, then the latter route is ignored by the route-selection algorithm. Therefore, only flows to prefixes that are reachable solely via the virtual hub SD-WAN NVA take the route through the NVA. This is a limitation during the Preview phase of the **Hub routing preference** feature.
+
+## Routing scenarios
+
+Virtual WAN hub routing preference is beneficial when multiple on-premises sites advertise routes to the same destination prefixes, which can happen in customer Virtual WAN scenarios that use any of the following setups.
+
+* Virtual WAN hub using ER connections as primary and VPN connections as backup.
+* Virtual WAN with connections to multiple on-premises and customer is using one on-premises site as active, and another as standby for a service deployed using the same IP address ranges in both the sites.
+* Virtual WAN has both VPN and ER connections simultaneously and the customer is distributing services across connections by controlling route advertisements from on-premises.
+
+The example below is a hypothetical Virtual WAN deployment that encompasses multiple scenarios described above. We'll use it to demonstrate the route selection by a virtual hub.
+
+A brief overview of the setup:
+
+* Each on-premises site is connected to one or more of the virtual hubs Hub_1 or Hub_2 using S2S VPN, or ER circuit, or SD-WAN NVA connections.
+* For each on-premises site, the ASN it uses and the route-prefixes it advertises are listed in the diagram. Notice that there are multiple routes for several route-prefixes.
+
+ :::image type="content" source="./media/about-virtual-hub-routing-preference/diagram.png" alt-text="Example diagram for hub-route-preference scenario." lightbox="./media/about-virtual-hub-routing-preference/diagram.png":::
+
+Let's say there are flows from a virtual network VNET1 connected to Hub_1 to various destination route-prefixes advertised by the on-premises sites. The path that each of those flows takes for different configurations of Virtual WAN **hub routing preference** on Hub_1 and Hub_2 is described in the tables below. The paths have been labeled in the diagram and referred to in the tables below for ease of understanding.
+
+**When only local routes are available:**
+
+| Flow destination route-prefix | HRP of Hub_1 | HRP of Hub_2 | Path used by flow | All possible paths | Explanation |
+| | | | | ||
+| 10.61.1.5 | AS Path | N/A | 4 | 1,2,3,4 | Paths 1 and 4 have the shortest AS Path, but ER takes precedence over VPN, so path 4 is chosen. |
+| 10.61.1.5 | VPN | N/A | 1 | 1,2,3,4 | VPN route is preferred over ER, so paths 1 and 2 are preferred, but path 1 has the shorter AS Path. |
+| 10.61.1.5 | ER | N/A | 4 | 1,2,3,4 | ER routes 3 and 4 are selected, but path 4 has the shorter AS Path. |
+
+**When only remote routes are available:**
+
+| Flow destination route-prefix | HRP of Hub_1 | HRP of Hub_2 | Path used by flow | All possible paths | Explanation |
+| | | | | ||
+| 10.62.1.5 | Any setting | AS Path or ER | ECMP across 9 & 10 | 7,8,9,10,11 | All available paths are remote and have equal AS Path, so ER paths 9 and 10 are chosen and advertised by Hub_2. Hub_1's HRP setting has no impact. |
+| 10.62.1.5 | Any setting | VPN | ECMP across 7 & 8 | 7,8,9,10,11 | Hub_2 will only advertise the best routes 7 & 8, and they're the only choices for Hub_1, so Hub_1's HRP setting has no impact. |
+
+**When local and remote routes are available:**
+
+| Flow destination route-prefix | HRP of Hub_1 | HRP of Hub_2 | Path used by flow | All possible paths | Explanation |
+| | | | | ||
+| 10.50.2.5 | Any setting | Any setting | 1 | 1,2,3,4,7,8,9,10,11 | Hub_2 will advertise only 7 due to LPM. Hub_1 selects 1 due to LPM and it being a local route. |
+| 10.50.1.5 | AS Path or ER | Any setting | 4 | 1,2,3,4,7,8,9,10,11 | Hub_2 will advertise different routes based on its HRP setting, but Hub_1 will select 4 due to being local, ER route with the shortest AS Path. |
+| 10.50.1.5 | VPN | Any setting | 1 | 1,2,3,4,7,8,9,10,11 | Hub_2 will advertise different routes based on its HRP setting, but Hub_1 will select 1 due to being local, VPN route with the shortest AS Path. |
+| 10.55.2.5 | AS Path | AS Path or ER | 9 | 2,3,8,9 | Hub_2 will only advertise 9, because 8 and 9 have same AS Path but 9 is ER route. On Hub_1, among 2, 3 and 9 routes, it selects 9 due to having the shortest AS Path. |
+| 10.55.2.5 | AS Path | VPN | 8 | 2,3,8,9 | Hub_2 will only advertise 8, because 8 and 9 have same AS Path but 8 is VPN route. On Hub_1, among 2, 3 and 8 routes, it selects 8 due to having the shortest AS Path. |
+| 10.55.2.5 | ER | Any setting | 3 | 2,3,8,9 | Hub_2 will advertise different routes based on its HRP setting, but Hub_1 will select 3 due to being local and ER. |
+| 10.55.2.5 | VPN | Any setting | 2 | 2,3,8,9 | Hub_2 will advertise different routes based on its HRP setting, but Hub_1 will select 2 due to being local and VPN. |
+
+**Key takeaways:**
+
+* To prefer remote routes over local routes on a virtual hub, set its hub routing preference to AS Path and increase the AS Path length of the local routes.
+
+## Next steps
+
+* To use virtual hub routing preference, see [How to configure virtual hub routing preference](howto-virtual-hub-routing-preference.md).
virtual-wan Howto Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-virtual-hub-routing-preference.md
+
+ Title: 'Configure virtual hub routing preference - Preview'
+
+description: Learn how to configure Virtual WAN virtual hub routing preference.
+++ Last updated : 05/30/2022++
+# Configure virtual hub routing preference (Preview)
+
+The following steps help you configure virtual hub routing preference settings. For information about this feature, see [Virtual hub routing preference](about-virtual-hub-routing-preference.md).
+
+> [!IMPORTANT]
+> The Virtual WAN feature **Hub routing preference** is currently in public preview. If you are interested in trying this feature, please follow the documentation below.
+This public preview is provided without a service-level agreement and shouldn't be used for production workloads. Certain features might not be supported, might have constrained capabilities, or might not be available in all Azure locations. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+
+## Configure
+
+You can configure a new virtual hub to include the virtual hub routing preference setting by using the [Azure Preview portal](https://portal.azure.com/?feature.customRouterAsn=true&feature.virtualWanRoutingPreference=true#home). Follow the steps in the [Tutorial: Create a site-to-site connection](virtual-wan-site-to-site-portal.md) article.
+
+To configure virtual hub routing preference for an existing virtual hub, use the following steps.
+
+1. Open the [Azure Preview portal](https://portal.azure.com/?feature.customRouterAsn=true&feature.virtualWanRoutingPreference=true#home). You can't use the regular Azure portal yet for this feature.
+
+1. Go to your virtual WAN. In the left pane, under the **Connectivity** section, click **Hubs** to view the list of hubs. Select **… > Edit virtual hub** to open the **Edit virtual hub** dialog box.
+
+ :::image type="content" source="./media/howto-virtual-hub-routing-preference/edit-virtual-hub.png" alt-text="Screenshot shows select Edit virtual hub." lightbox="./media/howto-virtual-hub-routing-preference/edit-virtual-hub-expand.png":::
+
+ You can also click the hub to open the virtual hub and then, under the virtual hub resource, click the **Edit virtual hub** button.
+
+ :::image type="content" source="./media/howto-virtual-hub-routing-preference/hub-edit.png" alt-text="Screenshot shows Edit virtual hub." lightbox="./media/howto-virtual-hub-routing-preference/hub-edit.png":::
+
+1. On the **Edit virtual hub** page, select from the dropdown to configure the field **Hub routing preference**. To determine the setting to use, see [About virtual hub routing preference](about-virtual-hub-routing-preference.md).
+
+ Click **Confirm** to save the settings.
+
+ :::image type="content" source="./media/howto-virtual-hub-routing-preference/select-preference.png" alt-text="Screenshot shows the dropdown showing ExpressRoute, VPN, and AS PATH." lightbox="./media/howto-virtual-hub-routing-preference/select-preference.png":::
+
+1. After the settings have saved, you can verify the configuration on the **Overview** page for the virtual hub.
+
+ :::image type="content" source="./media/howto-virtual-hub-routing-preference/view-preference.png" alt-text="Screenshot shows virtual hub Overview page with routing preference." lightbox="./media/howto-virtual-hub-routing-preference/view-preference-expand.png":::
+
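+While the portal is the documented way to configure this preview feature, a hedged Azure CLI sketch follows (hub and resource group names hypothetical; the `--hub-routing-preference` flag ships in the virtual-wan extension and may change while the feature is in preview):
+
+```azurecli-interactive
+# Allowed values are assumed to be ExpressRoute, VpnGateway, and ASPath.
+az extension add --name virtual-wan
+az network vhub update \
+  --name Hub_1 \
+  --resource-group myResourceGroup \
+  --hub-routing-preference ASPath
+```
+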
+## Next steps
+
+To learn more about virtual hub routing preference, see [About virtual hub routing preference](about-virtual-hub-routing-preference.md).
virtual-wan Hub Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md
description: This article answers common questions about virtual hub settings an
Previously updated : 05/20/2022 Last updated : 05/30/2022
For pricing information, see [Azure Virtual WAN pricing](https://azure.microsoft
| 49 | 49 | 49000 |
| 50 | 50 | 50000 |
+## <a name="routing-preference"></a>Virtual hub routing preference (Preview)
+
+A Virtual WAN virtual hub connects to virtual networks (VNets) and on-premises sites using connectivity gateways, such as site-to-site (S2S) VPN gateway, ExpressRoute (ER) gateway, point-to-site (P2S) gateway, and SD-WAN Network Virtual Appliance (NVA). The virtual hub router provides central route management and enables advanced routing scenarios using route propagation, route association, and custom route tables. When a virtual hub router makes routing decisions, it considers the configuration of such capabilities.
+
+Previously, there wasn't a configuration option for you to use to influence routing decisions within virtual hub router for prefixes in on-premises sites. These decisions relied on the virtual hub router's built-in route selection algorithm and the options available within gateways to manage routes before they reach the virtual hub router. To influence routing decisions in virtual hub router for prefixes in on-premises sites, you can now adjust the **Hub routing preference** using the [Azure Preview portal](https://portal.azure.com/?feature.customRouterAsn=true&feature.virtualWanRoutingPreference=true#home).
+
+For more information, see [About virtual hub routing preference](about-virtual-hub-routing-preference.md).
+ ## <a name="gateway"></a>Gateway settings Each virtual hub can contain multiple gateways (site-to-site, point-to-site User VPN, and ExpressRoute). When you create your virtual hub, you can configure gateways at the same time, or create an empty virtual hub and add the gateway settings later. When you edit a virtual hub, you'll see settings that pertain to gateways. For example, gateway scale units.
virtual-wan Virtual Wan Expressroute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-portal.md
Verify that you have met the following criteria before beginning your configurat
## <a name="hub"></a>Create a virtual hub and gateway
-A virtual hub is a virtual network that is created and used by Virtual WAN. It can contain various gateways, such as VPN and ExpressRoute. In this section, you will create an ExpressRoute gateway for your virtual hub. You can either create the gateway when you [create a new virtual hub](#newhub), or you can create the gateway in an [existing hub](#existinghub) by editing it.
+A virtual hub is a virtual network that is created and used by Virtual WAN. It can contain various gateways, such as VPN and ExpressRoute. In this section, you will create an ExpressRoute gateway for your virtual hub. You can either create the gateway when you [create a new virtual hub](#newhub), or you can create the gateway in an [existing hub](#existinghub) by editing it.
ExpressRoute gateways are provisioned in units of 2 Gbps: 1 scale unit = 2 Gbps, with support for up to 10 scale units (20 Gbps). It takes about 30 minutes for a virtual hub and gateway to fully create.
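As a hedged Azure CLI sketch (names hypothetical; `--min-val` is assumed to set the minimum scale units of the gateway's autoscale configuration):

```azurecli-interactive
# Creates an ExpressRoute gateway in an existing virtual hub with a minimum
# of 2 scale units (4 Gbps); names are placeholders.
az network express-route gateway create \
  --name myERGateway \
  --resource-group myResourceGroup \
  --virtual-hub myVirtualHub \
  --min-val 2
```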
Create a new virtual hub. Once a hub is created, you'll be charged for the hub,
### <a name="existinghub"></a>To create a gateway in an existing hub
-You can also create a gateway in an existing hub by editing it.
+You can also create a gateway in an existing hub by editing the hub.
-1. Navigate to the virtual hub that you want to edit and select it.
-2. On the **Edit virtual hub** page, select the checkbox **Include ExpressRoute gateway**.
-3. Select **Confirm** to confirm your changes. It takes about 30 minutes for the hub and hub resources to fully create.
-
- :::image type="content" source="./media/virtual-wan-expressroute-portal/edithub.png" alt-text="Screenshot shows editing an existing hub." border="false":::
+1. Go to the virtual WAN.
+1. In the left pane, select **Hubs**.
+1. On the **Virtual WAN | Hubs** page, click the hub that you want to edit.
+1. On the **Virtual HUB** page, at the top of the page, click **Edit virtual hub**.
+1. On the **Edit virtual hub** page, select the checkbox **Include ExpressRoute gateway** and adjust any other settings that you require.
+1. Select **Confirm** to confirm your changes. It takes about 30 minutes for the hub and hub resources to fully create.
### To view a gateway
vpn-gateway Vpn Gateway Bgp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-bgp-overview.md
Title: 'About BGP with VPN Gateway' description: Learn about Border Gateway Protocol (BGP) in Azure VPN, the standard internet protocol to exchange routing and reachability information between networks.- -- Previously updated : 09/02/2020 Last updated : 05/18/2022 - # About BGP with Azure VPN Gateway+ This article provides an overview of BGP (Border Gateway Protocol) support in Azure VPN Gateway.
BGP is the standard routing protocol commonly used on the Internet to exchange routing and reachability information between two or more networks. In the context of Azure virtual networks, BGP enables Azure VPN gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that inform both gateways about the availability and reachability of those prefixes through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating the routes a BGP gateway learns from one BGP peer to all other BGP peers.
## <a name="why"></a>Why use BGP?

BGP is an optional feature you can use with Azure route-based VPN gateways. Before you enable the feature, make sure your on-premises VPN devices support BGP. You can continue to use Azure VPN gateways and your on-premises VPN devices without BGP; that's the equivalent of using static routes (without BGP) *vs.* dynamic routing with BGP between your networks and Azure. BGP brings several advantages and new capabilities:

### <a name="prefix"></a>Support automatic and flexible prefix updates

With BGP, you only need to declare a minimum prefix to a specific BGP peer over the IPsec S2S VPN tunnel. It can be as small as a host prefix (/32) of the BGP peer IP address of your on-premises VPN device. You can control which on-premises network prefixes you advertise to Azure, and therefore which of them your Azure virtual network can reach. You can also advertise larger prefixes that may include some of your VNet address prefixes, such as a large private IP address space (for example, 10.0.0.0/8). Note, though, that the prefixes can't be identical to any one of your VNet prefixes; routes identical to your VNet prefixes will be rejected.

### <a name="multitunnel"></a>Support multiple tunnels between a VNet and an on-premises site with automatic failover based on BGP

You can establish multiple connections between your Azure VNet and your on-premises VPN devices in the same location. This capability provides multiple tunnels (paths) between the two networks in an active-active configuration. If one of the tunnels is disconnected, the corresponding routes are withdrawn via BGP and the traffic automatically shifts to the remaining tunnels.
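As a minimal Azure PowerShell sketch of such a BGP-enabled connection (the ASN, IP addresses, shared key, and resource names below are illustrative placeholders, not values from this article):

```azurepowershell
# Sketch: represent the on-premises VPN device as a BGP peer. The /32 address
# prefix is just the host prefix of the device's BGP peering address.
$lng = New-AzLocalNetworkGateway -Name "Site1" -ResourceGroupName "myRG" `
    -Location "East US" -GatewayIpAddress "203.0.113.10" `
    -AddressPrefix "10.51.255.254/32" -Asn 65050 -BgpPeeringAddress "10.51.255.254"

# Sketch: enable BGP on the S2S connection so route exchange (and failover) is dynamic.
$vng = Get-AzVirtualNetworkGateway -Name "vnetGw1" -ResourceGroupName "myRG"
New-AzVirtualNetworkGatewayConnection -Name "conn-site1" -ResourceGroupName "myRG" `
    -VirtualNetworkGateway1 $vng -LocalNetworkGateway2 $lng -Location "East US" `
    -ConnectionType IPsec -SharedKey "placeholderKey" -EnableBgp $true
```

For failover across multiple tunnels, you'd repeat the connection for each on-premises device, each declared as its own BGP peer.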
The following diagram shows a simple example of this highly available setup:
![Multiple active paths](./media/vpn-gateway-bgp-overview/multiple-active-tunnels.png)

### <a name="transitrouting"></a>Support transit routing between your on-premises networks and multiple Azure VNets

BGP enables multiple gateways to learn and propagate prefixes from different networks, whether they're directly or indirectly connected. This can enable transit routing with Azure VPN gateways between your on-premises sites or across multiple Azure virtual networks, as in the sketch that follows.
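As a hedged Azure PowerShell sketch of the VNet-to-VNet leg of such a topology (gateway and resource names are placeholders; each direction needs its own connection):

```azurepowershell
# Sketch: BGP-enabled VNet-to-VNet connections (one in each direction) let each
# gateway propagate the routes it learns, enabling transit through Microsoft's network.
$gw1 = Get-AzVirtualNetworkGateway -Name "vnetGw1" -ResourceGroupName "rg1"
$gw2 = Get-AzVirtualNetworkGateway -Name "vnetGw2" -ResourceGroupName "rg2"

New-AzVirtualNetworkGatewayConnection -Name "gw1-to-gw2" -ResourceGroupName "rg1" `
    -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 -Location "East US" `
    -ConnectionType Vnet2Vnet -SharedKey "placeholderKey" -EnableBgp $true
New-AzVirtualNetworkGatewayConnection -Name "gw2-to-gw1" -ResourceGroupName "rg2" `
    -VirtualNetworkGateway1 $gw2 -VirtualNetworkGateway2 $gw1 -Location "West US" `
    -ConnectionType Vnet2Vnet -SharedKey "placeholderKey" -EnableBgp $true
```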
The following diagram shows an example of a multi-hop topology with multiple paths that can transit traffic between the two on-premises networks through Azure VPN gateways within the Microsoft network:
![Multi-hop transit](./media/vpn-gateway-bgp-overview/full-mesh-transit.png)

## <a name="faq"></a>BGP FAQ

[!INCLUDE [vpn-gateway-faq-bgp-include](../../includes/vpn-gateway-faq-bgp-include.md)]

## Next steps
See [Getting started with BGP on Azure VPN gateways](vpn-gateway-bgp-resource-manager-ps.md) for steps to configure BGP for your cross-premises and VNet-to-VNet connections.