Updates from: 04/04/2022 01:05:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-permissions-and-consent.md
With the Microsoft identity platform endpoint, you can ignore the static permiss
Allowing an app to request permissions dynamically through the `scope` parameter gives developers full control over your user's experience. You can also front-load your consent experience and ask for all permissions in one initial authorization request (see the example request after the note below). If your app requires a large number of permissions, you can gather those permissions from the user incrementally as they try to use certain features of the app over time.

> [!IMPORTANT]
-> Dynamic consent can be convenient, but presents a big challenge for permissions that require admin consent, since the admin consent experience doesn't know about those permissions at consent time. If you require admin privileged permissions or if your app uses dynamic consent, you must register all of the permissions in the Azure portal (not just the subset of permissions that require admin consent). This enables tenant admins to consent on behalf of all their users.
+> Dynamic consent can be convenient, but presents a big challenge for permissions that require admin consent. The admin consent experience in the **App registrations** and **Enterprise applications** blades in the portal doesn't know about those dynamic permissions at consent time. We recommend that a developer list all the admin-privileged permissions that are needed by the app in the portal. This enables tenant admins to consent on behalf of all their users in the portal, once. Users won't need to go through the consent experience for those permissions at sign-in. The alternative is to use dynamic consent for those permissions. To grant admin consent, an individual admin signs in to the app, triggers a consent prompt for the appropriate permissions, and selects **consent for my entire org** in the consent dialog.
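For illustration, here's a minimal sketch of an authorization request that asks for Microsoft Graph scopes dynamically through the `scope` parameter; the client ID and redirect URI are placeholder values:

```http
GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize?
client_id=00001111-aaaa-2222-bbbb-3333cccc4444
&response_type=code
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&scope=https%3A%2F%2Fgraph.microsoft.com%2Fmail.read%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.send
```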
### Admin consent

[Admin consent](#using-the-admin-consent-endpoint) is required when your app needs access to certain high-privilege permissions. Admin consent ensures that administrators have some additional controls before authorizing apps or users to access highly privileged data from the organization.
-[Admin consent done on behalf of an organization](#requesting-consent-for-an-entire-tenant) still requires the static permissions registered for the app. Set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization. This reduces the cycles required by the organization admin to set up the application.
+[Admin consent done on behalf of an organization](#requesting-consent-for-an-entire-tenant) is highly recommended if your app has an enterprise audience. Admin consent done on behalf of an organization requires the static permissions to be registered for the app in the portal. Set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization. The admin can consent to those permissions on behalf of all users in the org, once. The users will not need to go through the consent experience for those permissions when signing in to the app. This is easier for users and reduces the cycles required by the organization admin to set up the application.
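As a hedged sketch of automating this setup, the following Azure CLI commands register a static Microsoft Graph `User.Read` delegated permission on an app and then grant tenant-wide admin consent; the application ID is a placeholder:

```azurecli
# Add Microsoft Graph User.Read as a static delegated permission.
# 00000003-0000-0000-c000-000000000000 is the Microsoft Graph app ID;
# e1fe6dd8-ba31-4d61-89e7-88639da4683d is the well-known User.Read scope ID.
az ad app permission add --id 00001111-aaaa-2222-bbbb-3333cccc4444 \
    --api 00000003-0000-0000-c000-000000000000 \
    --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope

# Grant admin consent for the whole tenant (requires an admin account).
az ad app permission admin-consent --id 00001111-aaaa-2222-bbbb-3333cccc4444
```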
## Requesting individual user consent
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Previously updated : 03/01/2022 Last updated : 04/01/2022

# Login to Windows virtual machine in Azure using Azure Active Directory authentication
-Organizations can now improve the security of Windows virtual machines (VMs) in Azure by integrating with Azure Active Directory (AD) authentication. You can now use Azure AD as a core authentication platform to RDP into a **Windows Server 2019 Datacenter edition** and later or **Windows 10 1809** and later. Additionally, you will be able to centrally control and enforce Azure RBAC and Conditional Access policies that allow or deny access to the VMs. This article shows you how to create and configure a Windows VM and login with Azure AD based authentication.
+Organizations can now improve the security of Windows virtual machines (VMs) in Azure by integrating with Azure Active Directory (AD) authentication. You can now use Azure AD as a core authentication platform to RDP into **Windows Server 2019 Datacenter edition** and later, or **Windows 10 1809** and later. You can then centrally control and enforce Azure RBAC and Conditional Access policies that allow or deny access to the VMs. This article shows you how to create and configure a Windows VM and log in with Azure AD-based authentication.
There are many security benefits of using Azure AD based authentication to login to Windows VMs in Azure, including:
-- Use your corporate Azure AD credentials to login to Windows VMs in Azure.
-- Reduce your reliance on local administrator accounts, you do not need to worry about credential loss/theft, users configuring weak credentials etc.
-- Password complexity and password lifetime policies configured for your Azure AD directory help secure Windows VMs as well.
-- With Azure role-based access control (Azure RBAC), specify who can login to a VM as a regular user or with administrator privileges. When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. When employees leave your organization and their user account is disabled or removed from Azure AD, they no longer have access to your resources.
-- With Conditional Access, configure policies to require multi-factor authentication and other signals such as low user and sign in risk before you can RDP to Windows VMs.
-- Use Azure deploy and audit policies to require Azure AD login for Windows VMs and to flag use of no approved local account on the VMs.
-- Login to Windows VMs with Azure Active Directory also works for customers that use Federation Services.
-- Automate and scale Azure AD join with MDM auto enrollment with Intune of Azure Windows VMs that are part for your VDI deployments. Auto MDM enrollment requires Azure AD P1 license. Windows Server 2019 VMs do not support MDM enrollment.
+- Use Azure AD credentials to login to Windows VMs in Azure.
+ - Federated and Managed domain users.
+- Reduce reliance on local administrator accounts.
+- Password complexity and password lifetime policies configured for your Azure AD help secure Windows VMs as well.
+- With Azure role-based access control (Azure RBAC):
+ - Specify who can login to a VM as a regular user or with administrator privileges.
+ - When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate.
+ - When employees leave your organization and their user account is disabled or removed from Azure AD, they no longer have access to your resources.
+- Configure Conditional Access policies to require multi-factor authentication and other signals such as user or sign in risk before you can RDP to Windows VMs.
+- Use Azure deploy and audit policies to require Azure AD login for Windows VMs and to flag use of unapproved local accounts on the VMs.
+- Automate and scale Azure AD join with MDM auto-enrollment with Intune for Azure Windows VMs that are part of your VDI deployments.
+ - Auto MDM enrollment requires Azure AD Premium P1 licenses. Windows Server VMs don't support MDM enrollment.
> [!NOTE]
-> Once you enable this capability, your Windows VMs in Azure will be Azure AD joined. You cannot join it to other domain like on-premises AD or Azure AD DS. If you need to do so, you will need to disconnect the VM from your Azure AD tenant by uninstalling the extension.
+> Once you enable this capability, your Windows VMs in Azure will be Azure AD joined. You cannot join them to another domain like on-premises AD or Azure AD DS. If you need to do so, you will need to disconnect the VM from Azure AD by uninstalling the extension.
## Requirements
This feature is now available in the following Azure clouds:
- Azure Global
- Azure Government
-- Azure China
+- Azure China 21Vianet
### Network requirements
For Azure Government
- `https://login.microsoftonline.us` - For authentication flows.
- `https://pasff.usgovcloudapi.net` - For Azure RBAC flows.
-For Azure China
+For Azure China 21Vianet
- `https://enterpriseregistration.partner.microsoftonline.cn` - For device registration.
- `http://169.254.169.254` - Azure Instance Metadata Service endpoint.
- `https://login.chinacloudapi.cn` - For authentication flows.
## Enabling Azure AD login for Windows VM in Azure
-To use Azure AD login for Windows VM in Azure, you need to first enable Azure AD login option for your Windows VM and then you need to configure Azure role assignments for users who are authorized to login in to the VM.
-There are multiple ways you can enable Azure AD login for your Windows VM:
+To use Azure AD login for Windows VM in Azure, you must:
+
+- First enable the Azure AD login option for your Windows VM.
+- Then configure Azure role assignments for users who are authorized to log in to the VM.
+
+There are two ways you can enable Azure AD login for your Windows VM:
-- Using the Azure portal experience when creating a Windows VM
-- Using the Azure Cloud Shell experience when creating a Windows VM **or for an existing Windows VM**
+- [Using Azure portal create VM experience to enable Azure AD login](#using-azure-portal-create-vm-experience-to-enable-azure-ad-login) when creating a Windows VM.
+- [Using the Azure Cloud Shell experience to enable Azure AD login](#using-the-azure-cloud-shell-experience-to-enable-azure-ad-login) when creating a Windows VM **or for an existing Windows VM**.
### Using Azure portal create VM experience to enable Azure AD login
To create a Windows Server 2019 Datacenter VM in Azure with Azure AD logon:
1. Sign in to the [Azure portal](https://portal.azure.com), with an account that has access to create VMs, and select **+ Create a resource**.
1. Type **Windows Server** in the Search the Marketplace search bar.
- 1. Click **Windows Server** and choose **Windows Server 2019 Datacenter** from Select a software plan dropdown.
- 1. Click on **Create**.
-1. On the "Management" tab, enable the option to **Login with AAD credentials** under the Azure Active Directory section from Off to **On**.
-1. Make sure **System assigned managed identity** under the Identity section is set to **On**. This action should happen automatically once you enable Login with Azure AD credentials.
-1. Go through the rest of the experience of creating a virtual machine. You will have to create an administrator username and password for the VM.
+ 1. Select **Windows Server** and choose **Windows Server 2019 Datacenter** from the Select a software plan dropdown.
+ 1. Select **Create**.
+1. On the "Management" tab, check the box to **Login with Azure AD** under the Azure AD section.
+1. Make sure **System assigned managed identity** under the Identity section is checked. This action should happen automatically once you enable Login with Azure AD.
+1. Go through the rest of the experience of creating a virtual machine. You'll have to create an administrator username and password for the VM.
![Login with Azure AD credentials create a VM](./media/howto-vm-sign-in-azure-ad-windows/azure-portal-login-with-azure-ad.png)
Azure Cloud Shell is a free, interactive shell that you can use to run the steps
- Open Cloud Shell in your browser.
- Select the Cloud Shell button on the menu in the upper-right corner of the [Azure portal](https://portal.azure.com).
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0.31 or later. Run az --version to find the version. If you need to install or upgrade, see the article [Install Azure CLI](/cli/azure/install-azure-cli).
+This article requires that you're running Azure CLI version 2.0.31 or later. Run `az --version` to find the version. If you need to install or upgrade, see the article [Install Azure CLI](/cli/azure/install-azure-cli).
1. Create a resource group with [az group create](/cli/azure/group#az-group-create).
1. Create a VM with [az vm create](/cli/azure/vm#az-vm-create) using a supported distribution in a supported region.
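As a minimal sketch, the following commands create a resource group and a Windows Server 2019 VM with a system-assigned managed identity, and then install the AADLoginForWindows extension; the names, region, and credentials are placeholders:

```azurecli
az group create --name myResourceGroup --location southcentralus

# --assign-identity gives the VM a system-assigned managed identity,
# which the AADLoginForWindows extension requires.
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image Win2019Datacenter \
    --assign-identity \
    --admin-username azureuser \
    --admin-password '<password>'

az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory \
    --name AADLoginForWindows \
    --resource-group myResourceGroup \
    --vm-name myVM
```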
The `provisioningState` of `Succeeded` is shown, once the extension is installed
## Configure role assignments for the VM
-Now that you have created the VM, you need to configure Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login:
+Now that you've created the VM, you need to configure Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login:
- **Virtual Machine Administrator Login**: Users with this role assigned can log in to an Azure virtual machine with administrator privileges.
- **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges.
To configure role assignments for your Azure AD enabled Windows Server 2019 Data
### Using the Azure Cloud Shell experience
-The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. The username of your active Azure account is obtained with [az account show](/cli/azure/account#az-account-show), and the scope is set to the VM created in a previous step with [az vm show](/cli/azure/vm#az-vm-show). The scope could also be assigned at a resource group or subscription level, and normal Azure RBAC inheritance permissions apply. For more information, see [Log in to a Linux virtual machine in Azure using Azure Active Directory authentication](../../virtual-machines/linux/login-using-aad.md).
+The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. The username of your active Azure AD account is obtained with [az account show](/cli/azure/account#az-account-show), and the scope is set to the VM created in a previous step with [az vm show](/cli/azure/vm#az-vm-show). The scope could also be assigned at a resource group or subscription level, and normal Azure RBAC inheritance permissions apply. For more information, see [Log in to a Linux virtual machine in Azure using Azure Active Directory authentication](../../virtual-machines/linux/login-using-aad.md).
-``` AzureCLI
+```AzureCLI
# Get the signed-in user's UPN and the VM's resource ID, then scope the role assignment to the VM.
username=$(az account show --query user.name --output tsv)
vm=$(az vm show --resource-group myResourceGroup --name myVM --query id -o tsv)

az role assignment create \
    --role "Virtual Machine Administrator Login" \
    --assignee $username \
    --scope $vm
```

> [!NOTE]
-> If your AAD domain and logon username domain do not match, you must specify the object ID of your user account with the `--assignee-object-id`, not just the username for `--assignee`. You can obtain the object ID for your user account with [az ad user list](/cli/azure/ad/user#az-ad-user-list).
+> If your Azure AD domain and logon username domain do not match, you must specify the object ID of your user account with the `--assignee-object-id`, not just the username for `--assignee`. You can obtain the object ID for your user account with [az ad user list](/cli/azure/ad/user#az-ad-user-list).
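A hedged sketch of that lookup for a hypothetical user `john@contoso.com` follows; note that newer, Microsoft Graph-based versions of the Azure CLI return the object ID in an `id` property rather than `objectId`:

```azurecli
# Look up the user's object ID (the property may be 'id' on newer CLI versions).
userObjectId=$(az ad user list \
    --filter "userPrincipalName eq 'john@contoso.com'" \
    --query "[0].objectId" --output tsv)

az role assignment create \
    --role "Virtual Machine User Login" \
    --assignee-object-id $userObjectId \
    --scope $vm
```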
For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see the following articles:
To log in to your Windows Server 2019 virtual machine using Azure AD:
1. Select **Connect** to launch the Windows logon dialog.
1. Log on using your Azure AD credentials.
-You are now signed in to the Windows Server 2019 Azure virtual machine with the role permissions as assigned, such as VM User or VM Administrator.
+You're now signed in to the Windows Server 2019 Azure virtual machine with the role permissions as assigned, such as VM User or VM Administrator.
> [!NOTE]
> You can save the .RDP file locally on your computer to launch future remote desktop connections to your virtual machine, instead of having to navigate to the virtual machine overview page in the Azure portal and using the connect option.

## Using Azure Policy to ensure standards and assess compliance
-Use Azure Policy to ensure Azure AD login is enabled for your new and existing Windows virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Windows VMs within your environment that do not have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Windows VMs that do not have Azure AD login enabled, as well as remediate existing Windows VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Windows VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
+Use Azure Policy to ensure Azure AD login is enabled for your new and existing Windows virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Windows VMs within your environment that don't have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Windows VMs that don't have Azure AD login enabled, and remediate existing Windows VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Windows VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
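As a hedged sketch, assigning such a policy at resource-group scope from the Azure CLI might look like the following; the policy definition ID is a placeholder you would look up in the built-in policy catalog, not a documented value:

```azurecli
# Assign a (placeholder) policy definition that audits Windows VMs
# without Azure AD login enabled, scoped to one resource group.
az policy assignment create \
    --name 'audit-azure-ad-login' \
    --policy '<policy-definition-name-or-id>' \
    --resource-group myResourceGroup
```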
## Troubleshoot
The AADLoginForWindows extension must install successfully in order for the VM t
> [!NOTE]
> If the extension restarts after the initial failure, the log with the deployment error will be saved as `CommandExecution_YYYYMMDDHHMMSSSSS.log`.
-"
+ 1. Open a PowerShell window on the VM and verify that these queries against the Instance Metadata Service (IMDS) endpoint running on the Azure host return the expected output:

| Command to run | Expected output |
The AADLoginForWindows extension must install successfully in order for the VM t
> [!NOTE]
> Azure AD join activity is captured in Event viewer under the `User Device Registration\Admin` log.
-If AADLoginForWindows extension fails with certain error code, you can perform the following steps:
+If the AADLoginForWindows extension fails with a certain error code, you can perform the following steps:
#### Issue 1: AADLoginForWindows extension fails to install with terminal error code '1007' and exit code: -2145648574.
This exit code translates to `DSREG_E_MSI_TENANTID_UNAVAILABLE` because the exte
1. Verify the Azure VM can retrieve the TenantID from the Instance Metadata Service.
- - RDP to the VM as a local administrator and verify the endpoint returns valid Tenant ID by running this command from an elevated PowerShell window on the VM:
+ - Connect to the VM as a local administrator and verify the endpoint returns a valid Tenant ID. Run the following command from an elevated PowerShell window on the VM:
- `curl -H Metadata:true http://169.254.169.254/metadata/identity/info?api-version=2018-02-01`
-1. The VM admin attempts to install the AADLoginForWindows extension, but a system assigned managed identity has not enabled the VM first. Navigate to the Identity blade of the VM. From the System assigned tab, verify Status is toggled to On.
+1. The VM admin attempts to install the AADLoginForWindows extension, but a system-assigned managed identity hasn't been enabled on the VM first. Navigate to the Identity blade of the VM. From the System assigned tab, verify Status is toggled to On.
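A hedged way to check and fix this from the Azure CLI, assuming the resource group and VM names used earlier:

```azurecli
# Show whether the VM has a system-assigned managed identity.
az vm identity show --resource-group myResourceGroup --name myVM

# If nothing is returned, enable the system-assigned identity.
az vm identity assign --resource-group myResourceGroup --name myVM
```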
#### Issue 2: AADLoginForWindows extension fails to install with Exit code: -2145648607
-This Exit code translates to `DSREG_AUTOJOIN_DISC_FAILED` because the extension is not able to reach the `https://enterpriseregistration.windows.net` endpoint.
+This Exit code translates to `DSREG_AUTOJOIN_DISC_FAILED` because the extension isn't able to reach the `https://enterpriseregistration.windows.net` endpoint.
1. Verify the required endpoints are accessible from the VM using PowerShell:
This Exit code translates to `DSREG_AUTOJOIN_DISC_FAILED` because the extension
Exit code 51 translates to "This extension is not supported on the VM's operating system".
-The AADLoginForWindows extension is only intended to be installed on Windows Server 2019 or Windows 10 (Build 1809 or later). Ensure the version of Windows is supported. If the build of Windows is not supported, uninstall the VM Extension.
+The AADLoginForWindows extension is only intended to be installed on Windows Server 2019 or Windows 10 (Build 1809 or later). Ensure the version of Windows is supported. If the build of Windows isn't supported, uninstall the VM Extension.
### Troubleshoot sign-in issues
Some common errors when you try to RDP with Azure AD credentials include no Azur
The Device and SSO State can be viewed by running `dsregcmd /status`. The goal is for Device State to show as `AzureAdJoined : YES` and `SSO State` to show `AzureAdPrt : YES`.
-Also, RDP Sign-in using Azure AD accounts is captured in Event viewer under the `AAD\Operational` event logs.
+RDP Sign-in using Azure AD accounts is captured in Event viewer under the `AAD\Operational` event logs.
#### Azure role not assigned
If you see the following error message when you initiate a remote desktop connec
![Your credentials did not work](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
-Verify that the Windows 10 or newer PC you are using to initiate the remote desktop connection is one that is either Azure AD joined, or hybrid Azure AD joined to the same Azure AD directory where your VM is joined to. For more information about device identity, see the article [What is a device identity](./overview.md).
+The Windows 10 or newer PC you're using to initiate the remote desktop connection must be Azure AD joined, or hybrid Azure AD joined to the same Azure AD directory. For more information about device identity, see the article [What is a device identity](./overview.md).
> [!NOTE]
> Windows 10 Build 20H1 added support for an Azure AD registered PC to initiate RDP connection to your VM. When using an Azure AD registered (not Azure AD joined or hybrid Azure AD joined) PC as the RDP client to initiate connections to your VM, you must enter credentials in the format `AzureAD\UPN` (for example, `AzureAD\john@contoso.com`).
-Verify that the AADLoginForWindows extension was not uninstalled after the Azure AD join finished.
+Verify that the AADLoginForWindows extension wasn't uninstalled after the Azure AD join finished.
Also, make sure that the security policy "Network security: Allow PKU2U authentication requests to this computer to use online identities" is enabled on both the server **and** the client.
If you see the following error message when you initiate a remote desktop connec
![Your credentials did not work](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
-Verify that the user doesn't have a temporary password. If the user has just been created, or if the user password has just been reset, the user's password is temporary and must be changed on the next sign-in. Temporary passwords cannot be used to log in to a remote desktop connection.
+Verify that the user doesn't have a temporary password. Temporary passwords can't be used to log in to a remote desktop connection.
-To resolve the issue, log in to the user account in a web browser, for instance by opening the [Azure portal](https://portal.azure.com) in a private browsing window. If you are prompted to change the password, set a new password and connect to the remote desktop connection with that new password.
+To resolve the issue, sign in with the user account in a web browser, for instance by opening the [Azure portal](https://portal.azure.com) in a private browsing window. If you're prompted to change the password, set a new password, then try connecting again.
#### MFA sign-in method required
If you see the following error message when you initiate a remote desktop connec
![The sign-in method you're trying to use isn't allowed.](./media/howto-vm-sign-in-azure-ad-windows/mfa-sign-in-method-required.png)
-If you have configured a Conditional Access policy that requires multi-factor authentication (MFA) before you can access the resource, then you need to ensure that the Windows 10 or newer PC initiating the remote desktop connection to your VM signs in using a strong authentication method such as Windows Hello. If you do not use a strong authentication method for your remote desktop connection, you will see the previous error.
+If you've configured a Conditional Access policy that requires multi-factor authentication (MFA) before you can access the resource, then you need to ensure that the Windows 10 or newer PC initiating the remote desktop connection to your VM signs in using a strong authentication method such as Windows Hello. If you don't use a strong authentication method for your remote desktop connection, you'll see the previous error.
- Your credentials did not work.
Set-MsolUser -UserPrincipalName username@contoso.com -StrongAuthenticationRequir
```
-If you have not deployed Windows Hello for Business and if that is not an option for now, you can exclude MFA requirement by configuring Conditional Access policy that excludes "**Azure Windows VM Sign-In**" app from the list of cloud apps that require MFA. To learn more about Windows Hello for Business, see [Windows Hello for Business Overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification).
+If you haven't deployed Windows Hello for Business and if that isn't an option for now, you can exclude the MFA requirement by configuring a Conditional Access policy that excludes the "**Azure Windows VM Sign-In**" app from the list of cloud apps that require MFA. To learn more about Windows Hello for Business, see [Windows Hello for Business Overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification).
> [!NOTE]
> Windows Hello for Business PIN authentication with RDP has been supported by Windows 10 for several versions; however, support for biometric authentication with RDP was added in Windows 10 version 1809. Using Windows Hello for Business authentication during RDP is only available for deployments that use the cert trust model, and is currently not available for the key trust model.
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
The supported service plans include:
External admin takeover is not supported for any service that has service plans that include SharePoint, OneDrive, or Skype For Business; for example, through an Office free subscription.

> [!NOTE]
-> External admin takeover is not supported cross cloud boundaries (ex. Azure Commercial to Azure Government). In these scenarios it is recommended to perform External admin takeover into another Azure Commercial tenant, and then delete the domain from this tenant so you may verify succesfully into the destination Azure Government tenant.
+> External admin takeover is not supported across cloud boundaries (for example, Azure Commercial to Azure Government). In these scenarios, it is recommended to perform external admin takeover into another Azure Commercial tenant, and then delete the domain from that tenant so you can verify it successfully into the destination Azure Government tenant.
You can optionally use the [**ForceTakeover** option](#azure-ad-powershell-cmdlets-for-the-forcetakeover-option) for removing the domain name from the unmanaged organization and verifying it on the desired organization.
applied-ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-business-card.md
See how data, including name, job title, address, email, and company name, is ex
> [!div class="nextstepaction"]
> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)
-#### Sample Labeling tool
+#### Sample Labeling tool (API v2.1)
You'll need a business card document. You can use our [sample business card document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/businessCard.png).
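For instance, a hedged sketch of sending that sample image to the v2.1 prebuilt business card REST API; the endpoint and key are placeholders, and the service replies with an `Operation-Location` header that you poll for results:

```bash
curl -v "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/v2.1/prebuilt/businessCard/analyze" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -d '{"source": "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/businessCard.png"}'
```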
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
See how data is extracted from your specific or unique documents by using custom
> [!div class="nextstepaction"]
> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)
-#### Sample Labeling tool
+#### Sample Labeling tool (API v2.1)
|Feature |Custom Template | Custom Neural |
applied-ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-id-document.md
See how to extract data, including name, birth date, machine-readable zone, and
> [!div class="nextstepaction"]
> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)
-#### Sample Labeling tool
+#### Sample Labeling tool (API v2.1)
You'll need an ID document. You can use our [sample ID document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/DriverLicense.png).
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
See how data, including time and date of transactions, merchant information, and
> [!div class="nextstepaction"]
> [Try Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)
-#### Sample Labeling tool
+#### Sample Labeling tool (API v2.1)
You will need a receipt document. You can use our [sample receipt document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-receipt.png).
azure-cache-for-redis Cache Best Practices Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-performance.md
## Redis-benchmark utility
-**Redis-benchmark** documentation can be [found here](https://redis.io/topics/benchmarks).
+**Redis-benchmark** documentation can be [found here](https://redis.io/docs/reference/optimization/benchmarks/).
The `redis-benchmark.exe` doesn't support TLS. You'll have to [enable the Non-TLS port through the Portal](cache-configure.md#access-ports) before you run the test. A Windows-compatible version of redis-benchmark.exe can be found [here](https://github.com/MSOpenTech/redis/releases).
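For example, a hedged benchmark invocation against the non-TLS port; the host name and access key are placeholders:

```bash
# 1,000,000 GET requests with 1-KB values, 50 clients, pipelined 50 at a time.
redis-benchmark -h <your-cache>.redis.cache.windows.net -p 6379 -a <access-key> \
  -t GET -n 1000000 -d 1024 -c 50 -P 50
```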
azure-monitor Alerts Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-query.md
This article describes how to write and convert [Log Alert](./alerts-unified-log
## How to start writing an alert log query
-Alert queries start from [querying the log data in Log Analytics](alerts-log.md#create-a-log-alert-rule-in-the-azure-portal) that indicates the issue. You can use the [alert query examples topic](../logs/queries.md) to understand what you can discover. You may also [get started on writing your own query](../logs/log-analytics-tutorial.md).
+Alert queries start from [querying the log data in Log Analytics](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal) that indicates the issue. You can use the [alert query examples topic](../logs/queries.md) to understand what you can discover. You may also [get started on writing your own query](../logs/log-analytics-tutorial.md).
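For example, a minimal sketch of a query that indicates an issue, using the standard `Heartbeat` table to find computers that have stopped reporting:

```kusto
// Computers whose last heartbeat is older than 15 minutes.
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(15m)
```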
### Queries that indicate the issue and not the alert
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
# Create, view, and manage log alerts using Azure Monitor
-## Overview
This article shows you how to create and manage log alerts. Azure Monitor log alerts allow users to use a [Log Analytics](../logs/log-analytics-tutorial.md) query to evaluate resource logs at a set frequency and fire an alert based on the results. Rules can trigger one or more actions using [Action Groups](./action-groups.md). [Learn more about functionality and terminology of log alerts](./alerts-unified-log.md).

Alert rules are defined by three components:
This article shows you how to create and manage log alerts. Azure Monitor log al
- Criteria: Logic to evaluate. If met, the alert fires.
- Action: Notifications or automation - email, SMS, webhook, and so on.

You can also [create log alert rules using Azure Resource Manager templates](../alerts/alerts-log-create-templates.md).
-## Create a log alert rule in the Azure portal
+## Create a new log alert rule in the Azure portal
> [!NOTE]
> This article describes creating alert rules using the new alert rule wizard.
> The new alert rule experience is a little different from the old experience. Please note these changes:
You can also [create log alert rules using Azure Resource Manager templates](../
1. In the [portal](https://portal.azure.com/), select the relevant resource. We recommend monitoring at scale by using a subscription or resource group for the alert rule.
1. In the Resource menu, select **Logs**.
-1. Write a query that will find the log events for which you want to create an alert. You can use the [alert query examples topic](../logs/queries.md) to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md). Also, [learn how to create optimized alert queries](alerts-log-query.md).
+1. Write a query that will find the log events for which you want to create an alert. You can use the [alert query examples article](../logs/queries.md) to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md). Also, [learn how to create optimized alert queries](alerts-log-query.md).
1. From the top command bar, select **+ New Alert rule**.

    :::image type="content" source="media/alerts-log/alerts-create-new-alert-rule.png" alt-text="Create new alert rule.":::
You can also [create log alert rules using Azure Resource Manager templates](../
:::image type="content" source="media/alerts-log/alerts-rule-tags-tab.png" alt-text="Tags tab."::: 1. In the **Review + create** tab, a validation will run and inform you of any issues.
-1. When validation passes and you have reviewed the settings, click the **Create** button.
+1. When validation passes and you have reviewed the settings, select the **Create** button.
:::image type="content" source="media/alerts-log/alerts-rule-review-create.png" alt-text="Review and create tab.":::+
+## Enable recommended out-of-the-box alert rules in the Azure portal (preview)
+> [!NOTE]
+> The alert recommendations feature is currently in preview and is only enabled for VMs.
+
+If you don't have any alert rules defined for the selected resource, you can enable our recommended out-of-the-box alert rules.
+The system compiles a list of recommended alert rules based on:
+- The resource provider's knowledge of important signals and thresholds for monitoring the resource.
+- Telemetry that tells us what customers commonly alert on for this resource.
+
+To enable recommended alert rules:
+1. On the **Alerts** page, select **Enable recommended alert rules**. The **Enable recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
+1. In the **Alert me if** section, select all of the rules you want to enable. The rules are populated with the default values for the rule condition, such as the percentage of CPU usage that you want to trigger an alert. You can change the default values if you would like.
+1. In the **Notify me by** section, select the way you want to be notified if an alert is fired.
+1. Select **Enable**.
## Manage alert rules in the Alerts portal

> [!NOTE]
-> This article describes how to manage alert rules created in the latest UI or using an API version later than `2018-04-16`. See [View and manage alert rules created in previous versions](alerts-manage-alerts-previous-version.md) for information about how to view and manage alert rules created in the previous UI.
+> This section describes how to manage alert rules created in the latest UI or using an API version later than `2018-04-16`. See [View and manage alert rules created in previous versions](alerts-manage-alerts-previous-version.md) for information about how to view and manage alert rules created in the previous UI.
1. In the [portal](https://portal.azure.com/), select the relevant resource.
1. Under **Monitoring**, select **Alerts**.
azure-monitor Alerts Managing Alert Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-managing-alert-instances.md
Last updated 2/23/2022

# Manage alert instances with unified alerts

With the [unified alerts experience](./alerts-overview.md) in Azure Monitor, you can see all your different types of alerts across Azure. Unified alerts span multiple subscriptions in a single pane. This article shows how you can view your alert instances, and how to find specific alert instances for troubleshooting.
You can go to the alerts page in any of the following ways:
![Screenshot of resource Monitoring Alerts](media/alerts-managing-alert-instances/alert-resource.JPG)

-- Use the context of a specific resource group. Open a resource group, go to the **Monitoring** section, and choose **Alerts**. The landing page is pre-filtered for alerts on that specific resource group.
+## The alerts page
- ![Screenshot of resource group Monitoring Alerts](media/alerts-managing-alert-instances/alert-rg.JPG)
+The **Alerts** page summarizes all your alert instances across Azure.
+### Alert Recommendations (preview)
+> [!NOTE]
+> The alert recommendations feature is currently in preview and is only enabled for VMs.
-## The alerts page
+If you don't have any alerts defined for the selected resource, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or [enable recommended out-of-the-box alert rules in the Azure portal (preview)](alerts-log.md#enable-recommended-out-of-the-box-alert-rules-in-the-azure-portal-preview).
+
+### Alerts summary pane
+If you have alerts configured for this resource, the alerts summary pane summarizes the alerts fired in the last 24 hours. You can modify the list of alert instances by selecting filters such as **time range**, **subscription**, **alert condition**, **severity**, and more. Select an alert instance.
-The **Alerts** page summarizes all your alert instances across Azure. You can modify the results by selecting filters such as **time range**, **subscription**, **alert condition**, **severity**, and more. You can select an alert instance to open the **Alert Details** page and see more details about the specific alert instance.
+To see more details about a specific alert instance, select the alert instance to open the **Alert Details** page.
> [!NOTE]
> If you navigated to the alerts page by selecting a specific alert severity, the list is pre-filtered for that severity.
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
You can alert on metrics and logs, as described in [monitoring data sources](./.
- Tests for website availability

## Alerts experience

### Alerts page
+The Alerts page provides a summary of the alerts created in the last 24 hours.
+### Alert Recommendations (preview)
+> [!NOTE]
+> The alert recommendations feature is currently in preview and is only enabled for VMs.
+
+If you don't have any alerts defined for the selected resource, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or [enable recommended out-of-the-box alert rules in the Azure portal (preview)](alerts-log.md#enable-recommended-out-of-the-box-alert-rules-in-the-azure-portal-preview).
-The Alerts page provides a summary of the alerts created in the last 24 hours. You can filter the list by the subscription or any of the filter parameters at the top of the page. The page displays the total alerts for each severity. Select a severity to filter the alerts by that severity.
+### Alerts summary pane
+If you have alerts configured for this resource, the alerts summary pane summarizes the alerts fired in the last 24 hours. You can filter the list by the subscription or any of the filter parameters at the top of the page. The page displays the total alerts for each severity. Select a severity to filter the alerts by that severity.
> [!NOTE]
> You can only access alerts generated in the last 30 days.
azure-monitor It Service Management Connector Secure Webhook Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/it-service-management-connector-secure-webhook-connections.md
Title: IT Service Management Connector - Secure Export in Azure Monitor
-description: This article shows you how to connect your ITSM products/services with Secure Export in Azure Monitor to centrally monitor and manage ITSM work items.
+ Title: IT Service Management Connector - Secure Webhook in Azure Monitor
+description: This article shows you how to connect your ITSM products/services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items.
Previously updated : 2/23/2022 Last updated : 03/30/2022
-# Connect Azure to ITSM tools by using Secure Export
+# Connect Azure to ITSM tools by using Secure Webhook
-This article shows you how to configure the connection between your IT Service Management (ITSM) product or service by using Secure Export.
+This article shows you how to configure the connection between your IT Service Management (ITSM) product or service by using Secure Webhook.
-Secure Export is an updated version of [IT Service Management Connector (ITSMC)](./itsmc-overview.md). Both versions allow you to create work items in an ITSM tool when Azure Monitor sends alerts. The functionality includes metric, log, and Activity Log alerts.
+Secure Webhook is an updated version of [IT Service Management Connector (ITSMC)](./itsmc-overview.md). Both versions allow you to create work items in an ITSM tool when Azure Monitor sends alerts. The functionality includes metric, log, and Activity Log alerts.
-ITSMC uses username and password credentials. Secure Export has stronger authentication because it uses Azure Active Directory (Azure AD). Azure AD is Microsoft's cloud-based identity and access management service. It helps users sign in and access internal or external resources. Using Azure AD with ITSM helps to identify Azure alerts (through the Azure AD application ID) that were sent to the external system.
+ITSMC uses username and password credentials. Secure Webhook has stronger authentication because it uses Azure Active Directory (Azure AD). Azure AD is Microsoft's cloud-based identity and access management service. It helps users sign in and access internal or external resources. Using Azure AD with ITSM helps to identify Azure alerts (through the Azure AD application ID) that were sent to the external system.
-## Secure Export architecture
+## Secure Webhook architecture
-The Secure Export architecture introduces the following new capabilities:
+The Secure Webhook architecture introduces the following new capabilities:
* **New action group**: Alerts are sent to the ITSM tool through the Secure Webhook action group, instead of the ITSM action group that ITSMC uses.
* **Azure AD authentication**: Authentication occurs through Azure AD instead of username/password credentials.
-## Secure Export data flow
-The steps of the Secure Export data flow are:
+## Secure Webhook data flow
+
+The steps of the Secure Webhook data flow are:
-1. Azure Monitor sends an alert that's configured to use Secure Export.
+1. Azure Monitor sends an alert that's configured to use Secure Webhook.
2. The alert payload is sent by a Secure Webhook action to the ITSM tool.
3. The ITSM application checks with Azure AD if the alert is authorized to enter the ITSM tool.
4. If the alert is authorized, the application:
The steps of the Secure Export data flow are:
![Diagram that shows how the ITSM tool communicates with Azure A D, Azure alerts, and an action group.](media/it-service-management-connector-secure-webhook-connections/secure-export-diagram.png)
-## Benefits of Secure Export
+## Benefits of Secure Webhook
The main benefits of the integration are:

* **Better authentication**: Azure AD provides more secure authentication without the timeouts that commonly occur in ITSMC.
-* **Alerts resolved in the ITSM tool**: Metric alerts implement "fired" and "resolved" states. When the condition is met, the alert state is "fired." When condition is not met anymore, the alert state is "resolved." In ITSMC, alerts can't be resolved automatically. With Secure Export, the resolved state flows to the ITSM tool and so is updated automatically.
-* **[Common alert schema](./alerts-common-schema.md)**: In ITSMC, the schema of the alert payload differs based on the alert type. In Secure Export, there's a common schema for all alert types. This common schema contains the CI for all alert types. All alert types will be able to bind their CI with the CMDB.
+* **Alerts resolved in the ITSM tool**: Metric alerts implement "fired" and "resolved" states. When the condition is met, the alert state is "fired." When the condition is no longer met, the alert state is "resolved." In ITSMC, alerts can't be resolved automatically. With Secure Webhook, the resolved state flows to the ITSM tool and so is updated automatically.
+* **[Common alert schema](./alerts-common-schema.md)**: In ITSMC, the schema of the alert payload differs based on the alert type. In Secure Webhook, there's a common schema for all alert types. This common schema contains the CI for all alert types. All alert types will be able to bind their CI with the CMDB.
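For reference, an abridged, illustrative sketch of a common-schema payload; only a few `essentials` properties are shown, and the values are placeholders:

```json
{
  "schemaId": "azureMonitorCommonAlertSchema",
  "data": {
    "essentials": {
      "alertRule": "cpu-high",
      "severity": "Sev3",
      "monitorCondition": "Resolved",
      "configurationItems": [ "myVM" ]
    }
  }
}
```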
## Next steps
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
Title: IT Service Management Connector - Secure Export in Azure Monitor - Azure Configurations
-description: This article shows you how to configure Azure in order to connect your ITSM products/services with Secure Export in Azure Monitor to centrally monitor and manage ITSM work items.
+ Title: IT Service Management Connector - Secure Webhook in Azure Monitor - Azure Configurations
+description: This article shows you how to configure Azure in order to connect your ITSM products/services with Secure Webhook in Azure Monitor to centrally monitor and manage ITSM work items.
Previously updated : 2/23/2022 Last updated : 03/30/2022
-# Configure Azure to connect ITSM tools using Secure Export
+# Configure Azure to connect ITSM tools using Secure Webhook
-This article provides information about how to configure the Azure in order to use "Secure Export".
-In order to use "Secure Export", follow these steps:
+This article provides information about how to configure Azure in order to use Secure Webhook.
+To use Secure Webhook, follow these steps:
1. [Register your app with Azure AD.](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory)
1. [Define Service principal.](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal)
1. [Create a Secure Webhook action group.](./itsm-connector-secure-webhook-connections-azure-configuration.md#create-a-secure-webhook-action-group)
1. Configure your partner environment.
- Secure Export supports connections with the following ITSM tools:
+ Secure Webhook supports connections with the following ITSM tools:
 * [ServiceNow](./itsmc-secure-webhook-connections-servicenow.md)
 * [BMC Helix](./itsmc-secure-webhook-connections-bmc.md)
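As a hedged sketch, steps 1 and 2 above can also be done from the Azure CLI; the display name is a placeholder:

```azurecli
# Register an Azure AD application for the ITSM tool and create its service principal.
appId=$(az ad app create --display-name "itsm-secure-webhook" --query appId --output tsv)
az ad sp create --id $appId
```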
To add a webhook to an action, follow these instructions for Secure Webhook:
The configuration contains two steps:
-1. Get the URI for the secure export definition.
+1. Get the URI for the Secure Webhook definition.
2. Complete the definitions according to the flow of the ITSM tool.

## Next steps
-* [ServiceNow Secure Export Configuration](./itsmc-secure-webhook-connections-servicenow.md)
-* [BMC Secure Export Configuration](./itsmc-secure-webhook-connections-bmc.md)
+* [ServiceNow Secure Webhook Configuration](./itsmc-secure-webhook-connections-servicenow.md)
+* [BMC Secure Webhook Configuration](./itsmc-secure-webhook-connections-bmc.md)
azure-monitor Itsmc Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections.md
Last updated 2/23/2022

# Connect ITSM products/services with IT Service Management Connector

This article provides information about how to configure the connection between your ITSM product/service and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items. For more information about ITSMC, see [Overview](./itsmc-overview.md).
-The following ITSM products/services are supported. Select the product to view detailed information about how to connect the product to ITSMC.
-- [ServiceNow](./itsmc-connections-servicenow.md)
-- [System Center Service Manager](./itsmc-connections-scsm.md)
-- [Cherwell](./itsmc-connections-cherwell.md)
-- [Provance](./itsmc-connections-provance.md)
+To set up your ITSM environment:
+1. Connect to your ITSM.
-> [!NOTE]
-> We propose our Cherwell and Provance customers to use [Webhook action](./action-groups.md#webhook) to Cherwell and Provance endpoint as another solution to the integration.
+ - For ServiceNow ITSM, see [the ServiceNow connection instructions](./itsmc-connections-servicenow.md).
+ - For SCSM, see [the System Center Service Manager connection instructions](./itsmc-connections-scsm.md).
-## IP ranges for ITSM partners connections
-In order to list the ITSM IP addresses in order to allow ITSM connections from partners ITSM tools, we recommend the to list the whole public IP range of Azure region where their LogAnalytics workspace belongs. [details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519)
+ >[!NOTE]
+ > As of March 1, 2022, System Center ITSM integrations with Azure alerts are no longer enabled for new customers. New System Center ITSM Connections are not supported.
+ > Existing ITSM connections are supported.
+2. (Optional) Set up the IP ranges. To allow ITSM connections from partner ITSM tools, list the whole public IP range of the Azure region where your Log Analytics workspace belongs. [Details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
For the EUS/WEU/EUS2/WUS2/US South Central regions, you can list the ActionGroup network tag only.

## Next steps
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md
Title: IT Service Management Connector overview description: This article provides an overview of IT Service Management Connector (ITSMC). Previously updated : 2/23/2022 Last updated : 3/30/2022
:::image type="icon" source="media/itsmc-overview/itsmc-symbol.png":::
-IT Service Management Connector (ITSMC) allows you to connect Azure to a supported IT Service Management (ITSM) product or service.
+IT Service Management Connector allows you to connect Azure Monitor to supported IT Service Management (ITSM) products or services using either ITSM actions or Secure webhook actions.
-Azure services like Azure Log Analytics and Azure Monitor provide tools to detect, analyze, and troubleshoot problems with your Azure and non-Azure resources. But the work items related to an issue typically reside in an ITSM product or service. ITSMC provides a bi-directional connection between Azure and ITSM tools to help you resolve issues faster.
+Azure services like Azure Log Analytics and Azure Monitor provide tools to detect, analyze, and troubleshoot problems with your Azure and non-Azure resources. But the work items related to an issue typically reside in an ITSM product or service. The ITSM Connector provides a bi-directional connection between Azure and ITSM tools to help you resolve issues faster. You can create work items in your ITSM tool, based on your Azure alerts (Metric Alerts, Activity Log Alerts, and Log Analytics alerts).
-## Configuration steps
+The ITSM Connector supports connections with the following ITSM tools:
-ITSMC supports connections with the following ITSM tools:
-- ServiceNow
-- System Center Service Manager
-- Provance
-- Cherwell
-
- >[!NOTE]
-> As of 1-Oct-2020 Cherwell and Provance ITSM integrations with Azure Alert will no longer be enabled for new customers. New ITSM Connections will not be supported.
-> Existing ITSM connections will be supported.
-
-With ITSMC, you can:
-- Create work items in your ITSM tool, based on your Azure alerts (Metric Alerts, Activity Log Alerts, and Log Analytics alerts).
-- Optionally, you can sync your incident and change request data from your ITSM tool to an Azure Log Analytics workspace.
+- ServiceNow ITSM or ITOM
+- System Center Service Manager (SCSM)
+- BMC
+ >[!NOTE]
+ > As of March 1, 2022, System Center ITSM integrations with Azure alerts are no longer enabled for new customers. New System Center ITSM Connections are not supported.
+ > Existing ITSM connections are supported.
For information about legal terms and the privacy policy, see [Microsoft Privacy Statement](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9).
-You can start using ITSMC by completing the following steps:
-
-1. [Setup your ITSM Environment to accept alerts from Azure.](./itsmc-connections.md)
-1. [Configure Azure ITSM Solution](./itsmc-definition.md#add-it-service-management-connector)
-1. [Configure Azure ITSM connector for your ITSM environment.](./itsmc-definition.md#create-an-itsm-connection)
-1. [Configure Action Group to leverage ITSM connector.](./itsmc-definition.md#define-a-template)
+## ITSM Connector Workflow
+Depending on your integration, start using the ITSM Connector with these steps:
+- For ServiceNow ITOM events and BMC Helix, use the Secure Webhook action:
+ 1. [Register your app with Azure AD.](./itsm-connector-secure-webhook-connections-azure-configuration.md#register-with-azure-active-directory)
+ 1. [Define Service principal.](./itsm-connector-secure-webhook-connections-azure-configuration.md#define-service-principal)
+ 1. [Create a Secure Webhook action group.](./itsm-connector-secure-webhook-connections-azure-configuration.md#create-a-secure-webhook-action-group)
+ 1. Configure your partner environment. Secure Webhook supports connections with the following ITSM tools:
+ - [ServiceNow ITOM](./itsmc-secure-webhook-connections-servicenow.md)
+ - [BMC Helix](./itsmc-secure-webhook-connections-bmc.md).
+
+- For ServiceNow ITSM and SCSM, use the ITSM action:
+
+ 1. Connect to your ITSM.
+ - For ServiceNow ITSM, see [the ServiceNow connection instructions](./itsmc-connections-servicenow.md).
+ - For SCSM, see [the System Center Service Manager connection instructions](./itsmc-connections-scsm.md).
+ 1. (Optional) Set up the IP ranges. To allow ITSM connections from partner ITSM tools, list the whole public IP range of the Azure region where your Log Analytics workspace belongs ([details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519)). For the EUS/WEU/EUS2/WUS2/US South Central regions, you can list the ActionGroup network tag only (see the sketch after these steps).
+ 1. [Configure Azure ITSM Solution](./itsmc-definition.md#add-it-service-management-connector)
+ 1. [Configure Azure ITSM connector for your ITSM environment.](./itsmc-definition.md#create-an-itsm-connection)
+ 1. [Configure Action Group to leverage ITSM connector.](./itsmc-definition.md#define-a-template)
## Next steps
azure-monitor Itsmc Resync Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-resync-servicenow.md
description: Reset the connection to ServiceNow so alerts in Microsoft Azure can
Previously updated : 2/23/2022 Last updated : 03/30/2022 # How to manually fix sync problems
azure-monitor Itsmc Secure Webhook Connections Bmc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-bmc.md
Title: IT Service Management Connector - Secure Export in Azure Monitor - Configuration with BMC
-description: This article shows you how to connect your ITSM products/services with BMC on Secure Export in Azure Monitor.
+ Title: IT Service Management Connector - Secure Webhook in Azure Monitor - Configuration with BMC
+description: This article shows you how to connect your ITSM products/services with BMC on Secure Webhook in Azure Monitor.
Previously updated : 2/23/2022 Last updated : 03/30/2022 # Connect BMC Helix to Azure Monitor
-The following sections provide details about how to connect your BMC Helix product and Secure Export in Azure.
+The following sections provide details about how to connect your BMC Helix product and Secure Webhook in Azure.
## Prerequisites
Ensure that you've met the following prerequisites:
## Configure the BMC Helix connection
-1. Use the following procedure in the BMC Helix environment in order to get the URI for the secure export:
+1. Use the following procedure in the BMC Helix environment to get the URI for the Secure Webhook:
   1. Log in to Integration Studio.
   1. Search for the **Create Incident from Azure Alerts** flow.
azure-monitor Itsmc Secure Webhook Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-servicenow.md
Title: IT Service Management Connector - Secure Export in Azure Monitor - Configuration with ServiceNow
-description: This article shows you how to connect your ITSM products/services with ServiceNow on Secure Export in Azure Monitor.
+ Title: IT Service Management Connector - Secure Webhook in Azure Monitor - Configuration with ServiceNow
+description: This article shows you how to connect your ITSM products/services with ServiceNow on Secure Webhook in Azure Monitor.
Previously updated : 2/23/2022 Last updated : 03/30/2022 # Connect ServiceNow to Azure Monitor
-The following sections provide details about how to connect your ServiceNow product and Secure Export in Azure.
+The following sections provide details about how to connect your ServiceNow product and Secure Webhook in Azure.
## Prerequisites
Ensure that you've met the following prerequisites:
## Configure the ServiceNow connection
-1. Use the link https://(instance name).service-now.com/api/sn_em_connector/em/inbound_event?source=azuremonitor the URI for the secure export definition.
+1. Use the link https://(instance name).service-now.com/api/sn_em_connector/em/inbound_event?source=azuremonitor as the URI for the Secure Webhook definition.
2. Follow the instructions according to the version: * [Rome](https://docs.servicenow.com/bundle/rome-it-operations-management/page/product/event-management/concept/azure-integration.html)
azure-monitor Itsmc Synced Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-synced-data.md
# Data synced from your ITSM product

Incidents and change requests are synced from your ITSM tool to your Log Analytics workspace, based on the connection's configuration (using the "Sync Data" field):
-* [ServiceNow](./itsmc-connections-servicenow.md)
-* [System Center Service Manager](./itsmc-connections-scsm.md)
-* [Cherwell](./itsmc-connections-cherwell.md)
-* [Provance](./itsmc-connections-provance.md)
-
+ - [ServiceNow](./itsmc-connections-servicenow.md)
+ - [System Center Service Manager](./itsmc-connections-scsm.md)
+ >[!NOTE]
+ > As of March 1, 2022, System Center ITSM integrations with Azure alerts are no longer enabled for new customers. New System Center ITSM connections are not supported.
+ > Existing ITSM connections are supported.
## Synced data

This section shows some examples of data gathered by ITSMC.
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Micro
- A functioning ASP.NET Core application. If you need to create an ASP.NET Core application, follow this [ASP.NET Core tutorial](/aspnet/core/getting-started/). - A valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md).
-> [!IMPORTANT]
-> [Connection Strings](./sdk-connection-string.md?tabs=net) are recommended over instrumentation keys. New Azure regions require using connection strings instead of instrumentation keys. Connection strings identify the appropriate endpoint for your Application Insights resource which provides the fastest way to ingest your telemetry for alerting and reporting. You will need to copy the connection string and add it to your application's code or to an environment variable.
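For reference, a minimal ASP.NET Core registration that supplies the connection string in code might look like the following sketch; the connection string value is a placeholder, and reading it from the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable works as well.

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;

var builder = WebApplication.CreateBuilder(args);

// Placeholder connection string; in production, prefer configuration or the
// APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
builder.Services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
{
    ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/"
});

var app = builder.Build();
app.Run();
```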
-
## Enable Application Insights server-side telemetry (Visual Studio)

For Visual Studio for Mac, use the [manual guidance](#enable-application-insights-server-side-telemetry-no-visual-studio). Only the Windows version of Visual Studio supports this procedure.
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
When your app has been created, a new pane opens. This pane is where you see per
The instrumentation key identifies the resource that you want to associate your telemetry data with. You will need to copy the instrumentation key and add it to your application's code.
-> [!IMPORTANT]
-> [Connection Strings](./sdk-connection-string.md) are recommended over instrumentation keys. New Azure regions **require** the use of connection strings instead of instrumentation keys. Connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable.
- ## Install the SDK in your app Install the Application Insights SDK in your app. This step depends heavily on the type of your application.
azure-monitor Custom Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-endpoints.md
To send data from Application Insights to certain regions, you'll need to override the default endpoint addresses. Each SDK requires slightly different modifications, all of which are described in this article. These changes require adjusting the sample code and replacing the placeholder values for `QuickPulse_Endpoint_Address`, `TelemetryChannel_Endpoint_Address`, and `Profile_Query_Endpoint_address` with the actual endpoint addresses for your specific region. The end of this article contains links to the endpoint addresses for regions where this configuration is required.
-> [!NOTE]
-> [Connection strings](./sdk-connection-string.md?tabs=net) are the new preferred method of setting custom endpoints within Application Insights.
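As a hedged sketch of that approach (the endpoint values below are placeholders, not real regional addresses), a connection string can replace the manual endpoint overrides:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// The connection string carries the endpoint overrides that previously
// required setting the QuickPulse/TelemetryChannel/Profile query addresses.
var configuration = TelemetryConfiguration.CreateDefault();
configuration.ConnectionString =
    "InstrumentationKey=00000000-0000-0000-0000-000000000000;" +
    "IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/;" +
    "LiveEndpoint=https://<region>.livediagnostics.monitor.azure.com/";

var telemetryClient = new TelemetryClient(configuration);
telemetryClient.TrackTrace("Telemetry routed through the overridden endpoints.");
```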
- [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Application Insights is an extensible analytics service for web developers that
## Get an Application Insights instrumentation key
-> [!IMPORTANT]
-> [Connection Strings](./sdk-connection-string.md?tabs=java) are recommended over instrumentation keys. New Azure regions **require** the use of connection strings instead of instrumentation keys. Connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable.
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. In the Azure portal, create an Application Insights resource. Set the application type to Java web application.
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
When the SDK **does** throttle the telemetry through sampling the `itemCount` is
One caveat for this example is that the Application Insights SDK samples based on operation ID, meaning that an `operation_Id` is selected and **all** of the telemetry for that single operation is ingested and saved (not random individual records). This can also result in fluctuations based on per-operation telemetry counts. If one operation has a higher number of records and that operation is sampled, it shows up as a spike in adjusted sample rates; for example, one operation might produce 4,000 telemetry records while the other operations produce only 1 to 3 telemetry records each. Sampling based on `operation_Id` is done to enable an end-to-end view for failing operations: all telemetry for an operation can be reviewed, including exception details, to precisely diagnose application code errors.
+> [!WARNING]
+> The integrity of a distributed operation's end-to-end view may be impacted if any application in the distributed operation has turned on sampling. Each application in a distributed operation makes its own sampling decision, so telemetry for one operation ID may be saved by one application while other applications decide not to sample the telemetry for that same operation ID.
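One way to keep sampling decisions aligned across the applications in a distributed operation is to disable adaptive sampling and apply the same fixed rate in every service. The following ASP.NET Core snippet is a sketch, assuming the Microsoft.ApplicationInsights.AspNetCore package; the 10 percent rate is illustrative:

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.ApplicationInsights.Extensibility;

var builder = WebApplication.CreateBuilder(args);

// Disable adaptive sampling so a fixed rate can be applied consistently.
builder.Services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
{
    EnableAdaptiveSampling = false
});

// Apply the same fixed sampling percentage in every application that takes
// part in the distributed operation, so per-app decisions stay aligned.
builder.Services.Configure<TelemetryConfiguration>(telemetryConfiguration =>
{
    var chainBuilder = telemetryConfiguration.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
    chainBuilder.UseSampling(samplingPercentage: 10);
    chainBuilder.Build();
});

var app = builder.Build();
app.Run();
```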
+ As sampling rates increase, the accuracy of log-based queries decreases, and counts are usually inflated. This impacts the accuracy of log-based queries only when sampling is enabled and the sample rates are in a higher range (~60%). The impact varies based on telemetry types, telemetry counts per operation, and other factors. To address the problems introduced by sampling, the SDKs use pre-aggregated metrics. Additional details about these metrics, log-based and pre-aggregated, are available in [Azure Application Insights - Azure Monitor | Microsoft Docs](./pre-aggregated-metrics-log-metrics.md#sdk-supported-pre-aggregated-metrics-table). Relevant properties of the logged data are identified and statistics are extracted before sampling occurs. To avoid resource and cost issues, metrics are aggregated. The resulting aggregate data is represented by only a few metric telemetry items per minute, instead of potentially thousands of event telemetry items. These metrics calculate the 25 requests from the example and send a metric to the MDM account reporting "this web app processed 25 requests", but the sent request telemetry record will have an `itemCount` of 100. These pre-aggregated metrics report the correct numbers and can be relied upon when sampling affects the log-based query results. They can be viewed on the Metrics blade of the Application Insights portal.
azure-monitor Prefer Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/prefer-options.md
The API supports setting some request options using the `Prefer` header. This se
## Visualization information
-In the query language, you can specify different [render options](https://docs.loganalytics.io/docs/Language-Reference/Tabular-operators/render-operator). By default, the API does not return information about the type of visualization. To include a specific visualization, include this header:
+In the query language, you can specify different render options. By default, the API does not return information about the type of visualization. To include a specific visualization, include this header:
```
Prefer: include-render=true
```
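As an illustrative sketch (the workspace ID, bearer token, and query are placeholders), the header can be sent to the Log Analytics query API like this:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>");
// Ask the API to include visualization (render) information in the response.
httpClient.DefaultRequestHeaders.Add("Prefer", "include-render=true");

var requestBody = new StringContent(
    "{\"query\": \"AzureActivity | summarize count() by CategoryValue | render piechart\"}",
    Encoding.UTF8,
    "application/json");

HttpResponseMessage response = await httpClient.PostAsync(
    "https://api.loganalytics.io/v1/workspaces/<workspace-id>/query",
    requestBody);
Console.WriteLine(await response.Content.ReadAsStringAsync());
```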
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 03/29/2022 Last updated : 04/02/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |
-> | managedInstances | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br> Can't start or end with hyphen. Can't have hyphen twice in both third and fourth place.<br><br> Can't have any special characters, such as `@`. |
-> | servers | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br>Can't start or end with hyphen. Can't have hyphen twice in both third and fourth place. |
+> | managedInstances | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br> Can't have any special characters, such as `@`.<br><br> Can't start or end with hyphen.<br><br> Can't have hyphens in both the third and fourth places. For example, `ab--cde` is not allowed. |
+> | servers | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br> Can't have any special characters, such as `@`.<br><br> Can't start or end with hyphen.<br><br> Can't have hyphens in both the third and fourth places. For example, `ab--cde` is not allowed. |
> | servers / administrators | server | | Must be `ActiveDirectory`. | > | servers / databases | server | 1-128 | Can't use:<br>`<>*%&:\/?` or control characters<br><br>Can't end with period or space. | > | servers / databases / syncGroups | database | 1-150 | Alphanumerics, hyphens, and underscores. | > | servers / elasticPools | server | 1-128 | Can't use:<br>`<>*%&:\/?` or control characters<br><br>Can't end with period or space. |
-> | servers / failoverGroups | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br>Can't start or end with hyphen. |
+> | servers / failoverGroups | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br> Can't have any special characters, such as `@`.<br><br> Can't start or end with hyphen.<br><br> Can't have hyphens in both the third and fourth places. For example, `ab--cde` is not allowed. |
> | servers / firewallRules | server | 1-128 | Can't use:<br>`<>*%&:;\/?` or control characters<br><br>Can't end with period. | > | servers / keys | server | | Must be in format:<br>`VaultName_KeyName_KeyVersion`. |
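To make the hyphen rules concrete, here is a hedged validation sketch; the helper is ours, not part of any SDK, and it encodes only the rules quoted above (1-63 lowercase letters, numbers, and hyphens; no leading or trailing hyphen; no hyphens in both the third and fourth places):

```csharp
using System.Text.RegularExpressions;

static bool IsValidSqlServerResourceName(string name) =>
    // (?!.{64}) : at most 63 characters
    // (?!-)     : no leading hyphen
    // (?!..--)  : no hyphens in both the third and fourth places
    // (?<!-)$   : no trailing hyphen
    Regex.IsMatch(name, @"^(?!.{64})(?!-)(?!..--)[a-z0-9-]+(?<!-)$");

// IsValidSqlServerResourceName("ab-cde")  -> true
// IsValidSqlServerResourceName("ab--cde") -> false
// IsValidSqlServerResourceName("-abcde")  -> false
```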
azure-sql Managed Instance Link Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-preparation.md
Previously updated : 03/22/2022 Last updated : 04/02/2022 # Prepare your environment for a link - Azure SQL Managed Instance
The following table describes port actions for each environment:
|Environment|What to do| |:|:--|
-|SQL Server (in Azure) | Open both inbound and outbound traffic on port 5022 for the network firewall to the entire subnet of SQL Managed Instance. If necessary, do the same on the Windows firewall. Create a network security group (NSG) rule in the virtual network that hosts the VM to allow communication on port 5022. |
-|SQL Server (outside Azure) | Open both inbound and outbound traffic on port 5022 for the network firewall to the entire subnet of SQL Managed Instance. If necessary, do the same on the Windows firewall. |
+|SQL Server (in Azure) | In the network firewall, open both inbound and outbound traffic on port 5022 to the entire subnet IP range of SQL Managed Instance. If necessary, do the same on the SQL Server host OS (Windows/Linux) firewall. Create a network security group (NSG) rule in the virtual network that hosts the VM to allow communication on port 5022. |
+|SQL Server (outside Azure) | In the network firewall, open both inbound and outbound traffic on port 5022 to the entire subnet IP range of SQL Managed Instance. If necessary, do the same on the SQL Server host OS (Windows/Linux) firewall. |
|SQL Managed Instance |[Create an NSG rule](../../virtual-network/manage-network-security-group.md#create-a-security-rule) in the Azure portal to allow inbound and outbound traffic from the IP address of SQL Server on port 5022 to the virtual network that hosts SQL Managed Instance. | Use the following PowerShell script on the Windows host of the SQL Server instance to open ports in the Windows firewall:
A successful test shows `TcpTestSucceeded : True`.
:::image type="content" source="./media/managed-instance-link-preparation/powershell-output-tnc-command.png" alt-text="Screenshot that shows the output of the command for testing a network connection in PowerShell.":::

If the response is unsuccessful, verify the following network settings:
-- There are rules in both the network firewall *and* the Windows firewall that allow traffic to the *subnet* of SQL Managed Instance.
+- There are rules in both the network firewall *and* the SQL Server host OS (Windows/Linux) firewall that allow traffic to the entire *subnet IP range* of SQL Managed Instance.
- There's an NSG rule that allows communication on port 5022 for the virtual network that hosts SQL Managed Instance.
cognitive-services How To Configure Rhel Centos 7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-rhel-centos-7.md
Title: How to configure RHEL/CentOS for C++ - Speech service
+ Title: How to configure RHEL/CentOS 7 - Speech service
description: Learn how to configure RHEL/CentOS 7 so that the Speech SDK can be used.
Previously updated : 04/02/2020 Last updated : 04/01/2022
-# Configure RHEL7/CentOS7 for C++
+# Configure RHEL/CentOS 7
-To use the Speech SDK for C++ development on Red Hat Enterprise Linux (RHEL) 7 x64 and CentOS 7 x64, update the C++ compiler and the shared C++ runtime library on your system.
+To use the Speech SDK on Red Hat Enterprise Linux (RHEL) 7 x64 and CentOS 7 x64, update the C++ compiler (for C++ development) and the shared C++ runtime library on your system.
## Install dependencies
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
See the sections in this article for the phonemes that are specific to each loca
## zh-CN [!INCLUDE [zh-CN](./includes/phonetic-sets/text-to-speech/zh-cn.md)]
+## zh-HK
+ ## zh-TW [!INCLUDE [zh-TW](./includes/phonetic-sets/text-to-speech/zh-tw.md)]
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
The final step would be to deploy the bot logic to the Web App we created. As we
## Step 2 - Get an Azure Communication Services Resource Now that you got the bot part sorted out, we'll need to get an Azure Communication Services resource, which we would use for configuring the Azure Communication Services channel.
-1. Create an Azure Communication Services resource. For details, see [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md). You'll need to **record your resource endpoint and key** for this quickstart.
+1. Create an Azure Communication Services resource. For details, see [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md).
2. Create an Azure Communication Services user and issue a user access token [User Access Token](../../quickstarts/access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string**.

## Step 3 - Enable Azure Communication Services Chat Channel
With the Azure Communication Services resource, we can configure the Azure Commu
:::image type="content" source="./media/smaller-demoapp-launch-acs-chat.png" alt-text="DemoApp Launch Acs Chat" lightbox="./media/demoapp-launch-acs-chat.png":::
-2. Provide the resource endpoint and the key belonging to the Azure Communication Services resource that you want to connect with.
+2. Choose from the dropdown list the Azure Communication Services resource that you want to connect with.
:::image type="content" source="./media/smaller-demoapp-connect-acsresource.png" alt-text="DemoApp Connect Acs Resource" lightbox="./media/demoapp-connect-acsresource.png":::
And on the Azure Communication Services User side, the Azure Communication Servi
## Next steps
-Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
+Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-dotnet.md
Previously updated : 03/24/2022 Last updated : 04/01/2022
Watch the video below to learn more about using the .NET SDK from a Cosmos DB en
|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK will not retry on writes for transient failures as writes are not idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](troubleshoot-dot-net-sdk.md#retry-logics) | |<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. | |<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. |
-| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you are not aware of the number of partitions, start by using `int.MaxValue` which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
+| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestOptions` to the number of partitions you have. If you are not aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-dotnet-sdk-v3-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. | | <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths (IndexingPolicy.IncludedPaths and IndexingPolicy.ExcludedPaths). Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit](performance-tips-dotnet-sdk-v3-sql.md#indexing-policy) | | <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
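As a hedged sketch of how a few of these checklist items map to client configuration (the endpoint, key, and retry values are illustrative placeholders):

```csharp
using System;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "<account-endpoint>",
    "<account-key>",
    new CosmosClientOptions
    {
        // Bulk support: batch high-volume operations behind the scenes.
        AllowBulkExecution = true,
        // Retry logic for throttled (429) requests: tune attempts and wait time.
        MaxRetryAttemptsOnRateLimitedRequests = 9,
        MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(30)
    });

// Caching database/collection names: resolve handles once at startup and reuse them.
Container container = client.GetContainer("<database-id>", "<container-id>");
```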
cosmos-db Best Practice Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-java.md
Previously updated : 03/28/2022 Last updated : 04/01/2022
This article walks through the best practices for using the Azure Cosmos DB Java
| <input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use [project reactor's timeout API](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#timeout-java.time.Duration-). For more details on timeouts with Cosmos DB [visit here](troubleshoot-request-timeout-java-sdk-v4-sql.md) | | <input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit here](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) | | <input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `CosmosAsyncDatabase#read()` or `CosmosAsyncContainer#read()` will result in metadata calls to the service, which consume from the system-reserved RU limit. `createDatabaseIfNotExists()` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. |
-| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-java-sdk-v4-sql.md#sdk-usage) for better latency and throughput on your queries. We recommend setting the `maxDegreeOfParallelism` property within the `CosmosQueryRequestsOptions` to the number of partitions you have. If you aren't aware of the number of partitions, set the value to `-1` that will give you the best latency. Also, set the `maxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
+| <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-java) for better latency and throughput on your queries. We recommend setting the `maxDegreeOfParallelism` property within the `CosmosQueryRequestOptions` to the number of partitions you have. If you aren't aware of the number of partitions, set the value to `-1`, which will give you the best latency. Also, set the `maxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
| <input type="checkbox"/> | Performance Testing Backoffs | When performing testing on your application, you should implement backoffs at [`RetryAfter`](performance-tips-java-sdk-v4-sql.md#sdk-usage) intervals. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries. | | <input type="checkbox"/> | Indexing | The Azure Cosmos DB indexing policy also allows you to specify which document paths to include or exclude from indexing by using indexing paths `IndexingPolicy#getIncludedPaths()` and `IndexingPolicy#getExcludedPaths()`. Ensure that you exclude unused paths from indexing for faster writes. For a sample on how to create indexes using the SDK [visit here](performance-tips-java-sdk-v4-sql.md#indexing-policy) | | <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. |
cosmos-db Performance Tips Dotnet Sdk V3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-dotnet-sdk-v3-sql.md
Previously updated : 02/18/2022 Last updated : 03/31/2022 ms.devlang: csharp
If you're trying to improve your database performance, consider the options pres
## Hosting recommendations
-**For query-intensive workloads, use Windows 64-bit instead of Linux or Windows 32-bit host processing**
-
-We recommend Windows 64-bit host processing for improved performance. The SQL SDK includes a native ServiceInterop.dll to parse and optimize queries locally. ServiceInterop.dll is supported only on the Windows x64 platform.
-
-For Linux and other unsupported platforms where ServiceInterop.dll isn't available, an additional network call is made to the gateway to get the optimized query.
-
-The four application types listed here use 32-bit host processing by default. To change host processing to 64-bit processing for your application type, do the following:
-
-- **For executable applications**: In the **Project Properties** window, on the **Build** pane, set the [platform target](/visualstudio/ide/how-to-configure-projects-to-target-platforms?preserve-view=true&view=vs-2019) to **x64**.
-
-- **For VSTest-based test projects**: On the Visual Studio **Test** menu, select **Test** > **Test Settings**, and then set **Default Processor Architecture** to **X64**.
-
-- **For locally deployed ASP.NET web applications**: Select **Tools** > **Options** > **Projects and Solutions** > **Web Projects**, and then select **Use the 64-bit version of IIS Express for web sites and projects**.
-
-- **For ASP.NET web applications deployed on Azure**: In the Azure portal, in **Application settings**, select the **64-bit** platform.
-
-> [!NOTE]
-> By default, new Visual Studio projects are set to **Any CPU**. We recommend that you set your project to **x64** so it doesn't switch to **x86**. A project that's set to **Any CPU** can easily switch to **x86** if an x86-only dependency is added.<br/>
-> The ServiceInterop.dll file needs to be in the folder that the SDK DLL is being executed from. This should be a concern only if you manually copy DLLs or have custom build or deployment systems.
-
**Turn on server-side garbage collection** Reducing the frequency of garbage collection can help in some cases. In .NET, set [gcServer](/dotnet/core/run-time-config/garbage-collector#flavors-of-garbage-collection) to `true`.
Enable *Bulk* for scenarios where the workload requires a large amount of throug
Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) is 50. To change the value, you can set [`Documents.Client.ConnectionPolicy.MaxConnectionLimit`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.gatewaymodemaxconnectionlimit) to a higher value.
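With the v3 SDK, a hedged example of raising the limit in Gateway mode (the value 1000 is illustrative) looks like this:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClientOptions options = new CosmosClientOptions
{
    ConnectionMode = ConnectionMode.Gateway,
    // Raise the per-endpoint connection limit for Gateway (HTTPS) traffic.
    GatewayModeMaxConnectionLimit = 1000
};

CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>", options);
```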
-**Tune parallel queries for partitioned collections**
-
-SQL .NET SDK supports parallel queries, which enable you to query a partitioned container in parallel. For more information, see [code samples](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs) related to working with the SDKs. Parallel queries are designed to provide better query latency and throughput than their serial counterpart.
-
-Parallel queries provide two parameters that you can tune to fit your requirements:
--- **MaxConcurrency**: Controls the maximum number of partitions that can be queried in parallel.-
- Parallel query works by querying multiple partitions in parallel. But data from an individual partition is fetched serially with respect to the query. Setting `MaxConcurrency` in [SDK V3](https://github.com/Azure/azure-cosmos-dotnet-v3) to the number of partitions has the best chance of achieving the most performant query, provided all other system conditions remain the same. If you don't know the number of partitions, you can set the degree of parallelism to a high number. The system will choose the minimum (number of partitions, user provided input) as the degree of parallelism.
-
- Parallel queries produce the most benefit if the data is evenly distributed across all partitions with respect to the query. If the partitioned collection is partitioned so that all or most of the data returned by a query is concentrated in a few partitions (one partition is the worst case), those partitions will bottleneck the performance of the query.
-
-- **MaxBufferedItemCount**: Controls the number of pre-fetched results.-
- Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. This pre-fetching helps improve the overall latency of a query. The `MaxBufferedItemCount` parameter limits the number of pre-fetched results. Set `MaxBufferedItemCount` to the expected number of results returned (or a higher number) to allow the query to receive the maximum benefit from pre-fetching.
-
- Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
-
-**Implement backoff at RetryAfter intervals**
-
-During performance testing, you should increase load until a small rate of requests are throttled. If requests are throttled, the client application should back off throttling for the server-specified retry interval. Respecting the backoff helps ensure that you'll spend a minimal amount of time waiting between retries.
-
-For more information, see [RetryAfter](/dotnet/api/microsoft.azure.cosmos.cosmosexception.retryafter#Microsoft_Azure_Cosmos_CosmosException_RetryAfter).
-
-There's a mechanism for logging additional diagnostics information and troubleshooting latency issues, as shown in the following sample. You can log the diagnostics string for requests that have a higher read latency. The captured diagnostics string will help you understand how many times you received a *429* error for a given request.
-
-```csharp
-ItemResponse<Book> readItemResponse = await this.cosmosContainer.ReadItemAsync<Book>("ItemId", new PartitionKey("PartitionKeyValue"));
-readItemResponse.Diagnostics.ToString();
-```
- **Increase the number of threads/tasks** See [Increase the number of threads/tasks](#increase-threads) in the Networking section of this article.
+## Query operations
+
+For query operations, see the [performance tips for queries](performance-tips-query-sdk.md?tabs=v3&pivots=programming-language-csharp).
## <a id="indexing-policy"></a> Indexing policy

**Exclude unused paths from indexing for faster writes**
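A hedged sketch of excluding unused paths at container creation; the path values are illustrative, and `database` is assumed to be an existing `Database` instance:

```csharp
using Microsoft.Azure.Cosmos;

ContainerProperties containerProperties = new ContainerProperties(
    id: "myContainer",
    partitionKeyPath: "/partitionKey");

// Index only the paths your queries filter on; excluding the rest
// reduces the RU cost of writes.
containerProperties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/city/?" });
containerProperties.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/*" });

// 'database' is an existing Database instance (assumption for this sketch).
Container container = await database.CreateContainerIfNotExistsAsync(containerProperties);
```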
cosmos-db Performance Tips Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-java-sdk-v4-sql.md
ms.devlang: java Previously updated : 08/26/2021 Last updated : 04/01/2022
As a first step, use the following recommended configuration settings below. The
| idleEndpointTimeout | "PT1H" | | maxRequestsPerConnection | "30" |
-* **Tuning parallel queries for partitioned collections**
-
-Azure Cosmos DB Java SDK v4 supports parallel queries, which enable you to query a partitioned collection in parallel. For more information, see [code samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples) related to working with Azure Cosmos DB Java SDK v4. Parallel queries are designed to improve query latency and throughput over their serial counterpart.
-
-* ***Tuning setMaxDegreeOfParallelism\:***
-
-Parallel queries work by querying multiple partitions in parallel. However, data from an individual partitioned collection is fetched serially with respect to the query. So, use setMaxDegreeOfParallelism to set the number of partitions that has the maximum chance of achieving the most performant query, provided all other system conditions remain the same. If you don't know the number of partitions, you can use setMaxDegreeOfParallelism to set a high number, and the system chooses the minimum (number of partitions, user provided input) as the maximum degree of parallelism.
-
-It is important to note that parallel queries produce the best benefits if the data is evenly distributed across all partitions with respect to the query. If the partitioned collection is partitioned such a way that all or a majority of the data returned by a query is concentrated in a few partitions (one partition in worst case), then the performance of the query would be bottlenecked by those partitions.
-
-* ***Tuning setMaxBufferedItemCount\:***
-
-Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. The pre-fetching helps in overall latency improvement of a query. setMaxBufferedItemCount limits the number of pre-fetched results. Setting setMaxBufferedItemCount to the expected number of results returned (or a higher number) enables the query to receive maximum benefit from pre-fetching.
-
-Pre-fetching works the same way irrespective of the MaxDegreeOfParallelism, and there is a single buffer for the data from all partitions.
- * **Scale out your client-workload** If you are testing at high throughput levels, the client application may become the bottleneck due to the machine capping out on CPU or network utilization. If you reach this point, you can continue to push the Azure Cosmos DB account further by scaling out your client applications across multiple servers.
A good rule of thumb is not to exceed >50% CPU utilization on any given server,
<a id="tune-page-size"></a>
-* **Tune the page size for queries/read feeds for better performance**
-
-When performing a bulk read of documents by using read feed functionality (for example, *readItems*) or when issuing a SQL query (*queryItems*), the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
-
-Suppose that your application issues a query to Azure Cosmos DB, and suppose that your application requires the full set of query results in order to complete its task. To reduce the number of network round trips required to retrieve all applicable results, you can increase the page size by adjusting the [x-ms-max-item-count](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) request header field.
-
-* For single-partition queries, adjusting the [x-ms-max-item-count](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) field value to -1 (no limit on page size) maximizes latency by minimizing the number of query response pages: either the full result set will return in a single page, or if the query takes longer than the timeout interval, then the full result set will be returned in the minimum number of pages possible. This saves on multiples of the request round-trip time.
-
-* For cross-partition queries, setting the [x-ms-max-item-count](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) field to -1 and removing the page size limit risks overwhelming the SDK with unmanageable page sizes. In the cross-partition case we recommend raising the page size limit up to some large but finite value, perhaps 1000, to reduce latency.
-
-In some applications, you may not require the full set of query results. In cases where you need to display only a few results, for example, if your user interface or application API returns only 10 results at a time, you can also decrease the page size to 10 to reduce the throughput consumed for reads and queries.
-
-You may also set the preferred page size argument of the *byPage* method, rather than modifying the REST header field directly. Keep in mind that [x-ms-max-item-count](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) or the preferred page size argument of *byPage* are only setting an upper limit on page size, not an absolute requirement; so for a variety of reason you may see Azure Cosmos DB return pages which are smaller than your preferred page size.
- * **Use Appropriate Scheduler (Avoid stealing Event loop IO Netty threads)** The asynchronous functionality of Azure Cosmos DB Java SDK is based on [netty](https://netty.io/) non-blocking IO. The SDK uses a fixed number of IO netty event loop threads (as many CPU cores your machine has) for executing IO operations. The Flux returned by API emits the result on one of the shared IO event loop netty threads. So it is important to not block the shared IO event loop netty threads. Doing CPU intensive work or blocking operation on the IO event loop netty thread may cause deadlock or significantly reduce SDK throughput.
Java SDK V4 (Maven com.azure::azure-cosmos) Sync API
The latter is supported but will add latency to your application; the SDK must parse the item and extract the partition key.
+## Query operations
+
+For query operations, see the [performance tips for queries](performance-tips-query-sdk.md?pivots=programming-language-java).
+ ## Indexing policy * **Exclude unused paths from indexing for faster writes**
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-query-sdk.md
+
+ Title: Azure Cosmos DB performance tips for queries using the Azure Cosmos DB SDK
+description: Learn query configuration options to help improve performance using the Azure Cosmos DB SDK.
++++ Last updated : 04/01/2022+
+ms.devlang: csharp, java
+
+zone_pivot_groups: programming-languages-set-cosmos
++
+# Query performance tips for Azure Cosmos DB SDKs
+
+Azure Cosmos DB is a fast, flexible distributed database that scales seamlessly with guaranteed latency and throughput levels. You don't have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call. To learn more, see [provision container throughput](how-to-provision-container-throughput.md) or [provision database throughput](how-to-provision-database-throughput.md).
++
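For instance, a hedged sketch of that single call with the .NET SDK (`client` is an existing `CosmosClient`; the database and container names are placeholders):

```csharp
using Microsoft.Azure.Cosmos;

Container container = client.GetContainer("<database-id>", "<container-id>");

// Scale provisioned throughput up or down with a single call.
await container.ReplaceThroughputAsync(ThroughputProperties.CreateManualThroughput(1000));
```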
+## Reduce Query Plan calls
+
+To execute a query, a query plan needs to be built. In general, this represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation. There are two ways to remove this request and reduce the latency of the query operation:
+
+### Use local Query Plan generation
+
+The SQL SDK includes a native ServiceInterop.dll to parse and optimize queries locally. ServiceInterop.dll is supported only on the **Windows x64** platform. The following types of applications use 32-bit host processing by default. To change host processing to 64-bit processing, follow these steps, based on the type of your application:
+
+- For executable applications, you can change host processing by setting the [platform target](/visualstudio/ide/how-to-configure-projects-to-target-platforms?preserve-view=true) to **x64** in the **Project Properties** window, on the **Build** tab.
+
+- For VSTest-based test projects, you can change host processing by selecting **Test** > **Test Settings** > **Default Processor Architecture as X64** on the Visual Studio **Test** menu.
+
+- For locally deployed ASP.NET web applications, you can change host processing by selecting **Use the 64-bit version of IIS Express for web sites and projects** under **Tools** > **Options** > **Projects and Solutions** > **Web Projects**.
+
+- For ASP.NET web applications deployed on Azure, you can change host processing by selecting the **64-bit** platform in **Application settings** in the Azure portal.
+
+> [!NOTE]
+> By default, new Visual Studio projects are set to **Any CPU**. We recommend that you set your project to **x64** so it doesn't switch to **x86**. A project set to **Any CPU** can easily switch to **x86** if an x86-only dependency is added.<br/>
+> ServiceInterop.dll needs to be in the folder that the SDK DLL is being executed from. This should be a concern only if you manually copy DLLs or have custom build/deployment systems.
+
+### Use single partition queries
+
+# [V3 .NET SDK](#tab/v3)
+
+For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.partitionkey) property in `QueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By), the query plan can be avoided:
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() { PartitionKey = new PartitionKey("Washington")}))
+{
+ // ...
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+For queries that target a Partition Key by setting the [PartitionKey](/dotnet/api/microsoft.azure.documents.client.feedoptions.partitionkey) property in `FeedOptions` and contain no aggregations (including Distinct, DCount, Group By), the query plan can be avoided:
+
+```cs
+IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ new FeedOptions
+ {
+ PartitionKey = new PartitionKey("Washington")
+ }).AsDocumentQuery();
+```
+++
+> [!NOTE]
+> Cross-partition queries require the SDK to visit all existing partitions to check for results. The more [physical partitions](../partitioning-overview.md#physical-partitions) the container has, the slower they can potentially be.
+
+### Avoid recreating the iterator unnecessarily
+
+When all the query results are consumed by the current component, you don't need to re-create the iterator with the continuation for every page. Always prefer to drain the query fully unless the pagination is controlled by another calling component:
+
+# [V3 .NET SDK](#tab/v3)
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() { PartitionKey = new PartitionKey("Washington")}))
+{
+ while (feedIterator.HasMoreResults)
+ {
+ foreach(MyItem document in await feedIterator.ReadNextAsync())
+ {
+ // Iterate through documents
+ }
+ }
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+```cs
+IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ new FeedOptions
+ {
+ PartitionKey = new PartitionKey("Washington")
+ }).AsDocumentQuery();
+while (query.HasMoreResults)
+{
+ foreach(Document document in await query.ExecuteNextAsync())
+ {
+ // Iterate through documents
+ }
+}
+```
+++
+## Tune the degree of parallelism
+
+# [V3 .NET SDK](#tab/v3)
+
+For queries, tune the [MaxConcurrency](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.maxconcurrency) property in `QueryRequestOptions` to identify the best configurations for your application, especially if you perform cross-partition queries (without a filter on the partition-key value). `MaxConcurrency` controls the maximum number of parallel tasks, that is, the maximum number of partitions to be visited in parallel. Setting the value to -1 will let the SDK decide the optimal concurrency.
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() {
+ PartitionKey = new PartitionKey("Washington"),
+ MaxConcurrency = -1 }))
+{
+ // ...
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+For queries, tune the [MaxDegreeOfParallelism](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxdegreeofparallelism) property in `FeedOptions` to identify the best configurations for your application, especially if you perform cross-partition queries (without a filter on the partition-key value). `MaxDegreeOfParallelism` controls the maximum number of parallel tasks, that is, the maximum number of partitions to be visited in parallel. Setting the value to -1 will let the SDK decide the optimal concurrency.
+
+```cs
+IDocumentQuery<dynamic> query = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ new FeedOptions
+ {
+ MaxDegreeOfParallelism = -1,
+ EnableCrossPartitionQuery = true
+ }).AsDocumentQuery();
+```
+++
+Let's assume that
+* D = Default Maximum number of parallel tasks (= total number of processors in the client machine)
+* P = User-specified maximum number of parallel tasks
+* N = Number of partitions that needs to be visited for answering a query
+
+Following are implications of how the parallel queries would behave for different values of P.
+* (P == 0) => Serial Mode
+* (P == 1) => Maximum of one task
+* (P > 1) => Min (P, N) parallel tasks
+* (P < 0) => Min (N, D) parallel tasks
+
+## Tune the page size
+
+When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
+
+> [!NOTE]
+> The `MaxItemCount` property shouldn't be used just for pagination. Its main use is to improve the performance of queries by reducing the maximum number of items returned in a single page.
+
+# [V3 .NET SDK](#tab/v3)
+
+You can also set the page size by using the available Azure Cosmos DB SDKs. The [MaxItemCount](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.maxitemcount) property in `QueryRequestOptions` allows you to set the maximum number of items to be returned in the enumeration operation. When `MaxItemCount` is set to -1, the SDK automatically finds the optimal value, depending on the document size. For example:
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() {
+ PartitionKey = new PartitionKey("Washington"),
+ MaxItemCount = 1000}))
+{
+ // ...
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+You can also set the page size by using the available Azure Cosmos DB SDKs. The [MaxItemCount](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxitemcount) property in `FeedOptions` allows you to set the maximum number of items to be returned in the enumeration operation. When `MaxItemCount` is set to -1, the SDK automatically finds the optimal value, depending on the document size. For example:
+
+```csharp
+IQueryable<dynamic> authorResults = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT p.Author FROM Pages p WHERE p.Title = 'About Seattle'",
+ new FeedOptions { MaxItemCount = 1000 });
+```
+++
+When a query is executed, the resulting data is sent within a TCP packet. If you specify too low a value for `MaxItemCount`, the number of trips required to send the data within the TCP packet is high, which affects performance. So if you're not sure what value to set for the `MaxItemCount` property, it's best to set it to -1 and let the SDK choose the default value.
+
+## Tune the buffer size
+
+# [V3 .NET SDK](#tab/v3)
+
+Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. This pre-fetching helps improve the overall latency of a query. The [MaxBufferedItemCount](/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.maxbuffereditemcount) property in `QueryRequestOptions` limits the number of pre-fetched results. Set `MaxBufferedItemCount` to the expected number of results returned (or a higher number) to allow the query to receive the maximum benefit from pre-fetching. If you set this value to -1, the system will automatically determine the number of items to buffer.
+
+```cs
+using (FeedIterator<MyItem> feedIterator = container.GetItemQueryIterator<MyItem>(
+ "SELECT * FROM c WHERE c.city = 'Seattle'",
+ requestOptions: new QueryRequestOptions() {
+ PartitionKey = new PartitionKey("Washington"),
+ MaxBufferedItemCount = -1}))
+{
+ // ...
+}
+```
+
+# [V2 .NET SDK](#tab/v2)
+
+Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. This pre-fetching helps improve the overall latency of a query. The [MaxBufferedItemCount](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxbuffereditemcount) property in `FeedOptions` limits the number of pre-fetched results. Set `MaxBufferedItemCount` to the expected number of results returned (or a higher number) to allow the query to receive the maximum benefit from pre-fetching. If you set this value to -1, the system will automatically determine the number of items to buffer.
+
+```csharp
+IQueryable<dynamic> authorResults = client.CreateDocumentQuery(
+ UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName),
+ "SELECT p.Author FROM Pages p WHERE p.Title = 'About Seattle'",
+ new FeedOptions { MaxBufferedItemCount = -1 });
+```
+++
+Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
+++
+## Next steps
+
+To learn more about performance using the .NET SDK:
+
+* [Best practices for Azure Cosmos DB .NET SDK](best-practice-dotnet.md)
+* [Performance tips for Azure Cosmos DB .NET V3 SDK](performance-tips-dotnet-sdk-v3-sql.md)
+* [Performance tips for Azure Cosmos DB .NET V2 SDK](performance-tips.md)
++
+## Reduce Query Plan calls
+
+To execute a query, a query plan needs to be built. In general, this represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation.
+
+### Use Query Plan caching
+
+The query plan, for a query scoped to a single partition, is cached on the client. This eliminates the need to make a call to the gateway to retrieve the query plan after the first call. The key for the cached query plan is the SQL query string. You need to **make sure the query is [parametrized](sql-query-parameterized-queries.md)**. If not, the query plan cache lookup will often be a cache miss as the query string is unlikely to be identical across calls. Query plan caching is **enabled by default for Java SDK version 4.20.0 and above** and **for Spring Data Cosmos SDK version 3.13.0 and above**.
+
+### Use parametrized single partition queries
+
+For parametrized queries that are scoped to a partition key with [setPartitionKey](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setpartitionkey) in `CosmosQueryRequestOptions` and contain no aggregations (including Distinct, DCount, Group By), the query plan can be avoided:
+
+```java
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+options.setPartitionKey(new PartitionKey("Washington"));
+
+ArrayList<SqlParameter> paramList = new ArrayList<SqlParameter>();
+paramList.add(new SqlParameter("@city", "Seattle"));
+SqlQuerySpec querySpec = new SqlQuerySpec(
+ "SELECT * FROM c WHERE c.city = @city",
+ paramList);
+
+// Sync API
+CosmosPagedIterable<MyItem> filteredItems =
+ container.queryItems(querySpec, options, MyItem.class);
+
+// Async API
+CosmosPagedFlux<MyItem> filteredItems =
+ asyncContainer.queryItems(querySpec, options, MyItem.class);
+```
+
+> [!NOTE]
+> Cross-partition queries require the SDK to visit all existing partitions to check for results. The more [physical partitions](../partitioning-overview.md#physical-partitions) the container has, the slower these queries can potentially be.
+
+## Tune the degree of parallelism
+
+Parallel queries work by querying multiple partitions in parallel. However, data from an individual partition is fetched serially with respect to the query. So, use [setMaxDegreeOfParallelism](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxdegreeofparallelism) on `CosmosQueryRequestOptions` to set the value to the number of partitions you have. If you don't know the number of partitions, you can use `setMaxDegreeOfParallelism` to set a high number, and the system chooses the minimum of the number of partitions and the user-provided input as the maximum degree of parallelism. Setting the value to -1 lets the SDK decide the optimal concurrency.
+
+Parallel queries produce the most benefit when the data is evenly distributed across all partitions with respect to the query. If the partitioned container is partitioned in such a way that all or most of the data returned by a query is concentrated in a few partitions (one partition in the worst case), those partitions bottleneck the performance of the query.
+
+```java
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+options.setPartitionKey(new PartitionKey("Washington"));
+options.setMaxDegreeOfParallelism(-1);
+
+// Define the query
+
+// Sync API
+CosmosPagedIterable<MyItem> filteredItems =
+ container.queryItems(querySpec, options, MyItem.class);
+
+// Async API
+CosmosPagedFlux<MyItem> filteredItems =
+ asyncContainer.queryItems(querySpec, options, MyItem.class);
+```
+
+Let's assume that:
+* D = Default maximum number of parallel tasks (= total number of processors in the client machine)
+* P = User-specified maximum number of parallel tasks
+* N = Number of partitions that need to be visited to answer a query
+
+The following are the implications of how parallel queries behave for different values of P, illustrated in the sketch after this list.
+* (P == 0) => Serial mode
+* (P == 1) => Maximum of one task
+* (P > 1) => Min(P, N) parallel tasks
+* (P == -1) => Min(N, D) parallel tasks
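+
+A minimal sketch of this selection logic follows; the method name and return format are assumptions for illustration, not SDK API:
+
+```java
+// Illustrative only: maps a user-specified value p (= P) to the behavior
+// described above, given n partitions (= N) and d processors (= D).
+static String parallelBehavior(int p, int n, int d) {
+    if (p == 0) return "serial mode";
+    if (p == 1) return "maximum of one task";
+    if (p > 1) return "Min(P, N) = " + Math.min(p, n) + " parallel tasks";
+    return "Min(N, D) = " + Math.min(n, d) + " parallel tasks"; // p == -1
+}
+```
+
+For example, with P = -1 on an 8-processor client and a query that must visit 20 partitions, the SDK runs Min(20, 8) = 8 parallel tasks.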
+
+## Tune the page size
+
+When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 4 MB, whichever limit is hit first.
+
+You can use the `pageSize` parameter of `iterableByPage()` (sync API) or `byPage()` (async API) to define a page size:
+
+```java
+// Assumes continuationToken (a String, possibly null) and pageSize (an int) are defined.
+// Sync API
+Iterable<FeedResponse<MyItem>> filteredItemsAsPages =
+    container.queryItems(querySpec, options, MyItem.class).iterableByPage(continuationToken, pageSize);
+
+for (FeedResponse<MyItem> page : filteredItemsAsPages) {
+    for (MyItem item : page.getResults()) {
+        //...
+    }
+}
+
+// Async API
+Flux<FeedResponse<MyItem>> filteredItemsAsPages =
+    asyncContainer.queryItems(querySpec, options, MyItem.class).byPage(continuationToken, pageSize);
+
+filteredItemsAsPages.doOnNext(page -> {
+    for (MyItem item : page.getResults()) {
+        //...
+    }
+}).subscribe();
+```
+
+## Tune the buffer size
+
+Parallel query is designed to pre-fetch results while the client processes the current batch of results. This pre-fetching helps improve the overall latency of a query. [setMaxBufferedItemCount](/java/api/com.azure.cosmos.models.cosmosqueryrequestoptions.setmaxbuffereditemcount) in `CosmosQueryRequestOptions` limits the number of pre-fetched results. Setting `setMaxBufferedItemCount` to the expected number of results returned (or a higher number) enables the query to receive the maximum benefit from pre-fetching, but this can also result in high memory consumption. If you set this value to 0, the system will automatically determine the number of items to buffer.
+
+```java
+CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
+options.setPartitionKey(new PartitionKey("Washington"));
+options.setMaxBufferedItemCount(-1);
+
+// Define the query
+
+// Sync API
+CosmosPagedIterable<MyItem> filteredItems =
+ container.queryItems(querySpec, options, MyItem.class);
+
+// Async API
+CosmosPagedFlux<MyItem> filteredItems =
+ asyncContainer.queryItems(querySpec, options, MyItem.class);
+```
+
+Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
+
+## Next steps
+
+To learn more about performance using the Java SDK:
+
+* [Best practices for Azure Cosmos DB Java V4 SDK](best-practice-java.md)
+* [Performance tips for Azure Cosmos DB Java V4 SDK](performance-tips-java-sdk-v4-sql.md)
+
cosmos-db Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips.md
The [.NET v3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3) is released.
## <a id="hosting"></a> Hosting recommendations
-**For query-intensive workloads, use Windows 64-bit instead of Linux or Windows 32-bit host processing**
-
-We recommend Windows 64-bit host processing for improved performance. The SQL SDK includes a native ServiceInterop.dll to parse and optimize queries locally. ServiceInterop.dll is supported only on the Windows x64 platform. For Linux and other unsupported platforms where ServiceInterop.dll isn't available, an additional network call is made to the gateway to get the optimized query. The following types of applications use 32-bit host processing by default. To change host processing to 64-bit processing, follow these steps, based on the type of your application:
-
-- For executable applications, you can change host processing by setting the [platform target](/visualstudio/ide/how-to-configure-projects-to-target-platforms?preserve-view=true&view=vs-2019) to **x64** in the **Project Properties** window, on the **Build** tab.
-
-- For VSTest-based test projects, you can change host processing by selecting **Test** > **Test Settings** > **Default Processor Architecture as X64** on the Visual Studio **Test** menu.
-
-- For locally deployed ASP.NET web applications, you can change host processing by selecting **Use the 64-bit version of IIS Express for web sites and projects** under **Tools** > **Options** > **Projects and Solutions** > **Web Projects**.
-
-- For ASP.NET web applications deployed on Azure, you can change host processing by selecting the **64-bit** platform in **Application settings** in the Azure portal.
-
-> [!NOTE]
-> By default, new Visual Studio projects are set to **Any CPU**. We recommend that you set your project to **x64** so it doesn't switch to **x86**. A project set to **Any CPU** can easily switch to **x86** if an x86-only dependency is added.<br/>
-> ServiceInterop.dll needs to be in the folder that the SDK DLL is being executed from. This should be a concern only if you manually copy DLLs or have custom build/deployment systems.
-
**Turn on server-side garbage collection (GC)** Reducing the frequency of garbage collection can help in some cases. In .NET, set [gcServer](/dotnet/framework/configure-apps/file-schema/runtime/gcserver-element) to `true`.
A profiler, such as [PerfView](https://github.com/Microsoft/perfview), can be us
Azure Cosmos DB requests are made over HTTPS/REST when you use gateway mode. They're subjected to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (100 to 1,000) so the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for [ServicePointManager.DefaultConnectionLimit](/dotnet/api/system.net.servicepointmanager.defaultconnectionlimit) is 50. To change the value, you can set [Documents.Client.ConnectionPolicy.MaxConnectionLimit](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.maxconnectionlimit) to a higher value.
-**Tune parallel queries for partitioned collections**
-
-SQL .NET SDK 1.9.0 and later support parallel queries, which enable you to query a partitioned collection in parallel. For more information, see [code samples](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/samples/code-samples/Queries/Program.cs) related to working with the SDKs. Parallel queries are designed to provide better query latency and throughput than their serial counterpart. Parallel queries provide two parameters that you can tune to fit your requirements:
-- `MaxDegreeOfParallelism` controls the maximum number of partitions that can be queried in parallel.
-- `MaxBufferedItemCount` controls the number of pre-fetched results.
-
-***Tuning degree of parallelism***
-
-Parallel query works by querying multiple partitions in parallel. But data from an individual partition is fetched serially with respect to the query. Setting `MaxDegreeOfParallelism` in [SDK V2](sql-api-sdk-dotnet.md) to the number of partitions has the best chance of achieving the most performant query, provided all other system conditions remain the same. If you don't know the number of partitions, you can set the degree of parallelism to a high number. The system will choose the minimum (number of partitions, user provided input) as the degree of parallelism.
-
-Parallel queries produce the most benefit if the data is evenly distributed across all partitions with respect to the query. If the partitioned collection is partitioned so that all or most of the data returned by a query is concentrated in a few partitions (one partition is the worst case), those partitions will bottleneck the performance of the query.
-
-***Tuning MaxBufferedItemCount***
-
-Parallel query is designed to pre-fetch results while the current batch of results is being processed by the client. This pre-fetching helps improve the overall latency of a query. The `MaxBufferedItemCount` parameter limits the number of pre-fetched results. Set `MaxBufferedItemCount` to the expected number of results returned (or a higher number) to allow the query to receive the maximum benefit from pre-fetching.
-
-Pre-fetching works the same way regardless of the degree of parallelism, and there's a single buffer for the data from all partitions.
- **Implement backoff at RetryAfter intervals** During performance testing, you should increase load until a small rate of requests are throttled. If requests are throttled, the client application should back off on throttle for the server-specified retry interval. Respecting the backoff ensures you spend a minimal amount of time waiting between retries.
readDocument.RequestDiagnosticsString
Cache document URIs whenever possible for the best read performance. You need to define logic to cache the resource ID when you create a resource. Lookups based on resource IDs are faster than name-based lookups, so caching these values improves performance.
-**Tune the page size for queries/read feeds for better performance**
-
-When you do a bulk read of documents by using read feed functionality (for example, `ReadDocumentFeedAsync`) or when you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first.
-
-To reduce the number of network round trips required to retrieve all applicable results, you can increase the page size by using [x-ms-max-item-count](/rest/api/cosmos-db/common-cosmosdb-rest-request-headers) to request as many as 1,000 headers. When you need to display only a few results, for example, if your user interface or application API returns only 10 results at a time, you can also decrease the page size to 10 to reduce the throughput consumed for reads and queries.
-
-> [!NOTE]
-> The `maxItemCount` property shouldn't be used just for pagination. Its main use is to improve the performance of queries by reducing the maximum number of items returned in a single page.
-
-You can also set the page size by using the available Azure Cosmos DB SDKs. The [MaxItemCount](/dotnet/api/microsoft.azure.documents.client.feedoptions.maxitemcount) property in `FeedOptions` allows you to set the maximum number of items to be returned in the enumeration operation. When `maxItemCount` is set to -1, the SDK automatically finds the optimal value, depending on the document size. For example:
-
-```csharp
-IQueryable<dynamic> authorResults = client.CreateDocumentQuery(documentCollection.SelfLink, "SELECT p.Author FROM Pages p WHERE p.Title = 'About Seattle'", new FeedOptions { MaxItemCount = 1000 });
-```
-
-When a query is executed, the resulting data is sent within a TCP packet. If you specify too low a value for `maxItemCount`, the number of trips required to send the data within the TCP packet is high, which affects performance. So if you're not sure what value to set for the `maxItemCount` property, it's best to set it to -1 and let the SDK choose the default value.
- **Increase the number of threads/tasks** See [Increase the number of threads/tasks](#increase-threads) in the networking section of this article.
+## Query operations
+
+For query operations, see the [performance tips for queries](performance-tips-query-sdk.md?tabs=v2&pivots=programming-language-csharp).
+ ## Indexing policy **Exclude unused paths from indexing for faster writes**
cosmos-db Troubleshoot Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-java-sdk-v4-sql.md
Title: Diagnose and troubleshoot Azure Cosmos DB Java SDK v4
description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Java SDK v4. Previously updated : 02/28/2022 Last updated : 04/01/2022 ms.devlang: java
CosmosPagedIterable<Family> filteredFamilies = container.queryItems(sql, new Cos
// Add handler to capture diagnostics filteredFamilies = filteredFamilies.handle(familyFeedResponse -> {
- logger.info("Query Item diagnostics through handler : {}",
+ logger.info("Query Item diagnostics through handle : {}",
familyFeedResponse.getCosmosDiagnostics()); });
filteredFamilies.iterableByPage().forEach(familyFeedResponse -> {
}); ```
+#### Cosmos Exceptions
+
+```Java
+try {
+ CosmosItemResponse<Family> familyCosmosItemResponse = container.readItem(documentId,
+ new PartitionKey(documentLastName), Family.class);
+} catch (CosmosException ex) {
+ CosmosDiagnostics diagnostics = ex.getDiagnostics();
+ logger.error("Read item failure diagnostics : {}", diagnostics);
+}
+```
# [Async](#tab/async)

#### Database Operations

```Java
Mono<CosmosDatabaseResponse> databaseResponseMono = client.createDatabaseIfNotExists(databaseName);
-CosmosDatabaseResponse cosmosDatabaseResponse = databaseResponseMono.block();
-
-CosmosDiagnostics diagnostics = cosmosDatabaseResponse.getDiagnostics();
-logger.info("Create database diagnostics : {}", diagnostics);
+databaseResponseMono.doOnNext(databaseResponse -> {
+    CosmosDiagnostics diagnostics = databaseResponse.getDiagnostics();
+    logger.info("Create database diagnostics : {}", diagnostics);
+}).subscribe();
```

#### Container Operations
logger.info("Create database diagnostics : {}", diagnostics);
```Java
Mono<CosmosContainerResponse> containerResponseMono = database.createContainerIfNotExists(containerProperties, throughputProperties);
-CosmosContainerResponse cosmosContainerResponse = containerResponseMono.block();
-CosmosDiagnostics diagnostics = cosmosContainerResponse.getDiagnostics();
-logger.info("Create container diagnostics : {}", diagnostics);
+containerResponseMono.doOnNext(containerResponse -> {
+    CosmosDiagnostics diagnostics = containerResponse.getDiagnostics();
+    logger.info("Create container diagnostics : {}", diagnostics);
+}).subscribe();
```

#### Item Operations

```Java
Mono<CosmosItemResponse<Family>> itemResponseMono = container.createItem(family,
new PartitionKey(family.getLastName()), new CosmosItemRequestOptions());
-CosmosItemResponse<Family> itemResponse = itemResponseMono.block();
-CosmosDiagnostics diagnostics = itemResponse.getDiagnostics();
-logger.info("Create item diagnostics : {}", diagnostics);
+itemResponseMono.doOnNext(itemResponse -> {
+    CosmosDiagnostics diagnostics = itemResponse.getDiagnostics();
+    logger.info("Create item diagnostics : {}", diagnostics);
+}).subscribe();
// Read Item
Mono<CosmosItemResponse<Family>> itemResponseMono = container.readItem(documentId,
    new PartitionKey(documentLastName), Family.class);
-CosmosItemResponse<Family> familyCosmosItemResponse = itemResponseMono.block();
-CosmosDiagnostics diagnostics = familyCosmosItemResponse.getDiagnostics();
-logger.info("Read item diagnostics : {}", diagnostics);
+
+itemResponseMono.doOnNext(itemResponse -> {
+    CosmosDiagnostics diagnostics = itemResponse.getDiagnostics();
+    logger.info("Read item diagnostics : {}", diagnostics);
+}).subscribe();
```

#### Query Operations
CosmosPagedFlux<Family> filteredFamilies = container.queryItems(sql, new CosmosQ
Family.class); // Add handler to capture diagnostics filteredFamilies = filteredFamilies.handle(familyFeedResponse -> {
- logger.info("Query Item diagnostics through handler : {}",
+ logger.info("Query Item diagnostics through handle : {}",
familyFeedResponse.getCosmosDiagnostics()); }); // Or capture diagnostics through byPage() APIs.
-filteredFamilies.byPage().toIterable().forEach(familyFeedResponse -> {
- logger.info("Query item diagnostics through iterableByPage : {}",
+filteredFamilies.byPage().doOnNext(familyFeedResponse -> {
+ logger.info("Query item diagnostics through byPage : {}",
familyFeedResponse.getCosmosDiagnostics());
-});
+}).subscribe();
``` +
+#### Cosmos Exceptions
+
+```Java
+Mono<CosmosItemResponse<Family>> itemResponseMono = container.readItem(documentId,
+ new PartitionKey(documentLastName), Family.class);
+
+itemResponseMono.onErrorResume(throwable -> {
+ if (throwable instanceof CosmosException) {
+ CosmosException cosmosException = (CosmosException) throwable;
+ CosmosDiagnostics diagnostics = cosmosException.getDiagnostics();
+ logger.error("Read item failure diagnostics : {}", diagnostics);
+ }
+ return Mono.error(throwable);
+}).subscribe();
+```
## Retry design <a id="retry-logics"></a><a id="retry-design"></a><a id="error-codes"></a>
data-factory Connector Amazon Simple Storage Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-simple-storage-service.md
Previously updated : 12/13/2021 Last updated : 04/01/2022 # Copy and transform data in Amazon Simple Storage Service using Azure Data Factory or Azure Synapse Analytics
This article outlines how to use Copy Activity to copy data from Amazon Simple S
This Amazon S3 connector is supported for the following activities: - [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)
+- [Mapping data flow](concepts-data-flow-overview.md)
- [Lookup activity](control-flow-lookup-activity.md) - [GetMetadata activity](control-flow-get-metadata-activity.md) - [Delete activity](delete-activity.md)
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
You can use this API to stream alerts from your **entire tenant** (and data from
- **Splunk Enterprise and Splunk Cloud** - [Use the Microsoft Graph Security API Add-On for Splunk](https://splunkbase.splunk.com/app/4564/) - **Power BI** - [Connect to the Microsoft Graph Security API in Power BI Desktop](/power-bi/connect-data/desktop-connect-graph-security).-- **ServiceNow** - [Install and configure the Microsoft Graph Security API application from the ServiceNow Store](https://docs.servicenow.com/bundle/orlando-security-management/page/product/secops-integration-sir/secops-integration-ms-graph/task/ms-graph-install.html).
+- **ServiceNow** - [Install and configure the Microsoft Graph Security API application from the ServiceNow Store](https://docs.servicenow.com/bundle/sandiego-security-management/page/product/secops-integration-sir/secops-integration-ms-graph/task/ms-graph-install.html?cshalt=yes).
- **QRadar** - [Use IBM's Device Support Module for Microsoft Defender for Cloud via Microsoft Graph API](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_dsm_guide_ms_azure_security_center_overview.html). - **Palo Alto Networks**, **Anomali**, **Lookout**, **InSpark**, and more - [Use the Microsoft Graph Security API](https://www.microsoft.com/security/business/graph-security-api#office-MultiFeatureCarousel-09jr2ji).
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Security recommendations in Microsoft Defender for Cloud description: This document walks you through how recommendations in Microsoft Defender for Cloud help you protect your Azure resources and stay in compliance with security policies. Previously updated : 03/31/2022 Last updated : 04/03/2022 # Review your security recommendations
Defender for Cloud analyzes the security state of your resources to identify pot
1. Select a specific recommendation to view the recommendation details page.
- :::image type="content" source="./media/review-security-recommendations/recommendation-details-page.png" alt-text="Recommendation details page." lightbox="./media/review-security-recommendations/recommendation-details-page-expanded.png":::
+ :::image type="content" source="./media/review-security-recommendations/recommendation-details-page.png" alt-text="Screenshot of the recommendation details page." lightbox="./media/review-security-recommendations/recommendation-details-page-expanded.png":::
1. For supported recommendations, the top toolbar shows any or all of the following buttons: - **Enforce** and **Deny** (see [Prevent misconfigurations with Enforce/Deny recommendations](prevent-misconfigurations.md)).
Recommendations can be downloaded to a CSV report from the Recommendations page.
1. Select **Download CSV report**.
- :::image type="content" source="media/review-security-recommendations/download-csv.png" alt-text="Screenshot showing you where to select the Download CSV report from.":::
+ :::image type="content" source="media/review-security-recommendations/download-csv.png" alt-text="Screenshot showing you where to select the Download C S V report from.":::
You'll know the report is being prepared by the pop-up.
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
description: Description of Microsoft Defender for Cloud's secure score and its
Previously updated : 03/31/2022 Last updated : 04/03/2022 # Security posture for Microsoft Defender for Cloud
To get all the possible points for a security control, all of your resources mus
### Example scores for a control In this example:
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
You can recover the password for the on-premises management console, or the sens
:::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of entering enter the unique identifier and then selecting recover." lightbox="media/how-to-create-and-manage-users/enter-identifier.png":::
- > [!NOTE]
- > Don't alter the password recovery file. It's a signed file, and will not work if tampered with.
- 1. On the Password recovery screen, select **Upload**. **The Upload Password Recovery File** window will open. 1. Select **Browse** to locate your `password_recovery.zip` file, or drag the `password_recovery.zip` to the window.
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
When signing into a preconfigured sensor for the first time, you'll need to perf
:::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of the Recover dialog box.":::
- > [!NOTE]
- > Don't alter the password recovery file. It's a signed file and won't work if you tamper with it.
- 1. On the **Password recovery** screen, select **Upload**. **The Upload Password Recovery File** window will open. 1. Select **Browse** to locate your `password_recovery.zip` file, or drag the `password_recovery.zip` to the window.
devtest-labs Devtest Lab Troubleshoot Apply Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-apply-artifacts.md
Title: Troubleshooting issues with artifacts
-description: Learn how to troubleshoot issues that occur when applying artifacts in an Azure DevTest Labs virtual machine.
+ Title: Troubleshoot artifact application
+description: Troubleshoot issues with applying artifacts on an Azure DevTest Labs virtual machine.
Previously updated : 11/04/2021 Last updated : 03/31/2022
-# Troubleshoot issues when applying artifacts in an Azure DevTest Labs virtual machine
+# Troubleshoot issues applying artifacts on DevTest Labs virtual machines
-Applying artifacts on a virtual machine can fail for various reasons. This article guides you through some of the methods to help identify possible causes.
+This article guides you through possible causes and troubleshooting steps for artifact failures on Azure DevTest Labs virtual machines (VMs).
-If you need more help at any point in this article, you can contact the Azure DevTest Labs (DTL) experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). You can also file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get Support.
+Artifacts are tools, actions, or software you can install on lab VMs during or after VM creation. Lab owners can [preselect mandatory artifacts](devtest-lab-mandatory-artifacts.md) to apply to all lab VMs at creation, and lab users can [apply artifacts to VMs](add-artifact-vm.md) that they own.
-> [!NOTE]
-> This article applies to both Windows and non-Windows virtual machines. While there are some differences, they will be called out explicitly in this article.
+There are several possible causes for artifacts failing to install or run correctly. When an artifact appears to stop responding, first try to determine where it's stuck. Artifact installation can be blocked during the initial request, or fail during request execution.
-## Quick troubleshooting steps
-Check that the VM is running. DevTest Labs requires the VM to be running and that the [Microsoft Azure Virtual Machine Agent (VM Agent)](../virtual-machines/extensions/agent-windows.md) is installed and ready.
+You can troubleshoot artifact failures from the Azure portal or from the VM where the artifact failed.
-> [!TIP]
-> In the **Azure portal**, navigate to the **Manage artifacts** page for the VM to see if the VM is ready for applying artifacts. You see a message at the very top of that page.
->
-> Using **Azure PowerShell**, inspect the flag **canApplyArtifacts**, which is returned only when you expand on a GET operation. See the following example command:
+## Troubleshoot artifact failures from the Azure portal
+
+If you can't apply an artifact to a VM, first check the following in the Azure portal:
+
+- Make sure that the VM is running.
+- Navigate to the **Artifacts** page for the lab VM to make sure the VM is ready for applying artifacts. If the Apply artifacts feature isn't available, you see a message at the top of the page.
+
+### Use a PowerShell command
+
+You can also use Azure PowerShell to determine whether the VM can apply artifacts. Inspect the flag `canApplyArtifacts`, which is returned when you expand on a `GET` operation. For example:
```powershell Select-AzSubscription -SubscriptionId $SubscriptionId | Out-Null
$vm = Get-AzResource `
$vm.Properties.canApplyArtifacts ```
-## Ways to troubleshoot
-You can troubleshoot VMs created using DevTest Labs and the Resource Manager deployment model by using one of the following methods:
+### Investigate the failed artifact
-- **Azure portal** - great if you need to quickly get a visual hint of what may be causing the issue.-- **Azure PowerShell** - if you're comfortable with a PowerShell prompt, quickly query DevTest Labs resources using the Azure PowerShell cmdlets.
+An artifact can stop responding, and finally appear as **Failed**. To investigate failed artifacts:
-> [!TIP]
-> For more information on how to review artifact execution within a VM, see [Diagnose artifact failures in the lab](devtest-lab-troubleshoot-artifact-failure.md).
+1. On your lab **Overview** page, from the list under **My virtual machines**, select the VM that has the artifact you want to investigate.
+1. On the VM **Overview** page, select **Artifacts** in the left navigation. The **Artifacts** page lists artifacts associated with the VM, and their status.
-## Symptoms, causes, and potential resolutions
+ ![Screenshot showing the list of artifacts and their status.](./media/devtest-lab-troubleshoot-apply-artifacts/artifact-list.png)
-### Artifact appears to stop responding
+1. Select the artifact that shows a **Failed** status. The artifact opens with an extension message that includes details about the artifact failure.
-An artifact appears to stop responding until a pre-defined timeout period expires, and the artifact is marked as **Failed**.
+ ![Screenshot of the error message for a failed artifact.](./media/devtest-lab-troubleshoot-apply-artifacts/artifact-failure.png)
-When an artifact appears to stop responding, first determine where it's stuck. An artifact can be blocked at any of the following steps during execution:
+### Inspect the Activity logs
-- **During the initial request**. DevTest Labs creates an Azure Resource Manager template to request the use of the Custom Script Extension (CSE). Behind the scenes, a resource group deployment is triggered. When an error at this level happens, you get details in the **Activity Logs** of the resource group for the VM in question.
- - You can access the activity log from the lab VM page navigation bar. When you select it, you see an entry for either **applying artifacts to virtual machine** (if the apply artifacts operation was triggered directly) or **Add or modify virtual machines** (if the applying artifacts operation was part of the VM creation process).
- - Look for errors under these entries. Sometimes, the error won't be tagged, so you'll have to investigate each entry.
- - When investigating the details of each entry, make sure to review the contents of the JSON payload. You may see an error at the bottom of that document.
+To install artifacts, DevTest Labs creates and deploys an Azure Resource Manager (ARM) template that requests use of the Custom Script Extension (CSE). An error at this level shows up in the **Activity logs** for the subscription and for the VM's resource group.
-- **When trying to execute the artifact**. It could be because of networking or storage issues. See the respective section later in this article for details. It can also happen because of the way the script is authored. For example:
- - A PowerShell script has **mandatory parameters**, but one fails to pass a value to it, either because you allow the user to leave it blank, or because you don't have a default value for the property in the artifactfile.json definition file. The script will stop responding because it's awaiting user input.
- - A PowerShell script **requires user input** as part of execution. Scripts must be written to work silently without requiring any user intervention.
+If an artifact failed to install, inspect the **Activity log** entries for either **Create or Update Virtual Machine Extension**, if you applied the artifact directly, or **Create or Update Virtual Machine**, if the artifact was being applied as part of VM creation. Look for failures under these entries. Sometimes you have to expand the entry to see the failure.
-- **VM Agent takes long to be ready**. When the VM first starts, or when the custom script extension first installs to serve the request to apply artifacts, the VM may require either upgrading the VM Agent or wait for the VM Agent to initialize. There may be services on which the VM Agent depends that are taking a long time to initialize. In such cases, see [Azure Virtual Machine Agent overview](../virtual-machines/extensions/agent-windows.md) for further troubleshooting.
+Select the failed entry to see the error details. On the failure page, select **JSON** to review the contents of the JSON payload. You can see the error at the end of the JSON document.
-### To verify if the artifact appears to stop responding because of the script
+### Investigate the private artifact repository and lab storage account
-1. Sign in to the virtual machine in question.
-2. Copy the script locally in the virtual machine or locate it on the virtual machine under `C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\<version>`. It's the location where the artifact scripts are downloaded.
-3. Using an elevated command prompt, execute the script locally, providing the same parameter values used to cause the issue.
-4. Determine if the script suffers from any unwanted behavior. If so, either request an update to the artifact (if it is from the public repo); or, make the corrections yourself (if it is from your private repo).
+When DevTest Labs applies an artifact, it reads the artifact configuration and files from connected repositories. By default, DevTest Labs has access to the DevTest Labs [public Artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts). You can also connect a lab to a private repository to access custom artifacts. If a custom artifact fails to install, make sure the personal access token (PAT) for the private repository hasn't expired. If the PAT is expired, the artifact won't be listed, and any scripts that refer to artifacts from that repository will fail.
-> [!TIP]
-> You can correct issues with artifacts hosted in our [public repo](https://github.com/Azure/azure-devtestlab) and submit the changes for our review and approval. See the **Contributions** section in the [README.md](https://github.com/Azure/azure-devtestlab/blob/master/Artifacts/README.md) document.
->
-> For information about writing your own artifacts, see [AUTHORING.md](https://github.com/Azure/azure-devtestlab/blob/master/Artifacts/AUTHORING.md) document.
-
-### To verify if the artifact appears to stop responding because of the VM Agent:
-1. Sign in to the virtual machine in question.
-2. Using File Explorer navigate to **C:\WindowsAzure\logs**.
-3. Locate and open file **WaAppAgent.log**.
-4. Look for entries that show when the VM Agent starts and when it's finishing initialization: the first sent heartbeat. Favor newer entries or specifically the ones around the time period for which you experience the issue.
-
- ```
- [00000006] [11/14/2019 05:52:13.44] [INFO] WindowsAzureGuestAgent starting. Version 2.7.41491.949
- ...
- [00000006] [11/14/2019 05:52:31.77] [WARN] Waiting for OOBE to Complete ...
- ...
- [00000006] [11/14/2019 06:02:30.43] [WARN] Waiting for OOBE to Complete ...
- [00000006] [11/14/2019 06:02:33.43] [INFO] StateExecutor initialization completed.
- [00000020] [11/14/2019 06:02:33.43] [HEART] WindowsAzureGuestAgent Heartbeat.
- ```
- In this example, the VM Agent start time took 10 minutes and 20 seconds because a heartbeat was sent. The cause in this case was the OOBE service taking a long time to start.
+Depending on configuration, lab VMs might not have direct access to the artifact repository. DevTest Labs caches the artifacts in a lab storage account that's created when the lab first initializes. If access to this storage account is blocked, such as when traffic is blocked from the VM to the Azure Storage service, you might see an error similar to this:
-> [!TIP]
-> For general information about Azure extensions, see [Azure virtual machine extensions and features](../virtual-machines/extensions/overview.md).
+```shell
+CSE Error: Failed to download all specified files. Exiting. Exception: Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (403) Forbidden. > System.Net.WebException: The remote server returned an error: (403) Forbidden.
+```
-## Storage errors
-DevTest Labs requires access to the lab's storage account that is created to cache artifacts. When DevTest Labs applies an artifact, it will read the artifact configuration and its files from the configured repositories. By default, DevTest Labs configures access to the **public artifact repo**.
+This error appears in the **Activity log** of the VM's resource group.
-Depending on how a VM is configured, it may not have direct access to this repo. By design, DevTest Labs caches the artifacts in a storage account that's created when the lab first initializes.
+To troubleshoot connectivity issues to the Azure Storage account:
-If access to this storage account is blocked in any way, as when traffic is blocked from the VM to the Azure Storage service, you may see an error similar to the following one:
+- Check for added network security groups (NSGs). If a subscription policy was added to automatically configure NSGs in all virtual networks, it would affect the virtual network used for creating lab VMs.
-```shell
-CSE Error: Failed to download all specified files. Exiting. Exception: Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (403) Forbidden. > System.Net.WebException: The remote server returned an error: (403) Forbidden.
-```
+- Verify NSG rules. Use [IP flow verify](../network-watcher/diagnose-vm-network-traffic-filtering-problem.md#use-ip-flow-verify) to determine whether an NSG rule is blocking traffic to or from a VM. You can also review effective security group rules to ensure that an inbound **Allow** NSG rule exists. For more information, see [Using effective security rules to troubleshoot VM traffic flow](/azure/virtual-network/diagnose-network-traffic-filter-problem).
-The above error would appear in the **Deployment Message** section in the **Artifact results** page under **Manage artifacts**. It will also appear in the **Activity Logs** under the resource group of the virtual machine in question.
+- Check the lab's default storage account. The default storage account is the first storage account created when the lab was created. The name usually starts with the letter "a" and ends with a multi-digit number, such as a\<labname>#.
-### To ensure communication to the Azure Storage service isn't being blocked:
+ 1. Navigate to the lab's resource group.
+ 1. Locate the resource of type **Storage account** whose name matches the convention.
+ 1. On the storage account **Overview** page, select **Firewalls and virtual networks** in the left navigation.
+ 1. Ensure that **Firewalls and virtual networks** is set to **All networks**. Or, if the **Selected networks** option is selected, make sure the lab's virtual networks used to create VMs are added to the list.
-- **Check for added network security groups (NSG)**. It may be that a subscription policy was added where NSGs are automatically configured in all virtual networks. It would also affect the lab's default virtual network, if used, or other virtual network configured in your lab, used for the creation of VMs.
+For in-depth troubleshooting, see [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security).
-- **Check the default lab's storage account** (that is, the first storage account created when the lab was created, whose name usually starts with the letter "a" and ends with a multi-digit number, that is, a\<labname\>#).
- 1. Navigate to the lab's resource group.
- 2. Locate the resource of type **storage account**, whose name matches the convention.
- 3. Navigate to the storage account page called **Firewalls and virtual networks**.
- 4. Ensure that it's set to **All networks**. If the **Selected networks** option is selected, then ensure that the lab's virtual networks used to create VMs are added to the list.
+## Troubleshoot artifact failures from the lab VM
-For more in-depth troubleshooting, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
+You can connect to the lab VM where the artifact failed, and investigate the issue there.
-> [!TIP]
-> **Verify network security group rules**. Use [IP flow verify](../network-watcher/diagnose-vm-network-traffic-filtering-problem.md#use-ip-flow-verify) to confirm that a rule in a network security group is blocking traffic to or from a virtual machine. You can also review effective security group rules to ensure inbound **Allow** NSG rule exists. For more information, see [Using effective security rules to troubleshoot VM traffic flow](../virtual-network/diagnose-network-traffic-filter-problem.md).
+### Inspect the Custom Script Extension log file
+
+1. On the lab VM, go to *C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\\*1.10.12\*\\Status\\*, where *\*1.10.12\** is the CSE version number.
+
+ ![Screenshot of the Status folder on the lab V M.](./media/devtest-lab-troubleshoot-apply-artifacts/status-folder.png)
-## Other sources of error
-There are other infrequent possible sources of error. Make sure to evaluate each to see if it applies to your case. Here's one of them:
+1. Open and inspect the *STATUS* file to view the error.
-- **Expired personal access token for the private repo**. When expired, the artifact won't get listed and any scripts that refer to artifacts from the repository will fail.
+For instructions on finding the log files on a **Linux** VM, see [Use the Azure Custom Script Extension Version 2 with Linux virtual machines](/azure/virtual-machines/extensions/custom-script-linux#troubleshooting).
+
+### Check the VM Agent
+
+Ensure that the [Azure Virtual Machine Agent (VM Agent)](/azure/virtual-machines/extensions/agent-windows) is installed and ready.
+
+When the VM first starts, or when the CSE first installs to serve the request to apply artifacts, the VM might need to either upgrade the VM Agent or wait for the VM Agent to initialize. The VM Agent might depend on services that take a long time to initialize. For further troubleshooting, see [Azure Virtual Machine Agent overview](/azure/virtual-machines/extensions/agent-windows).
+
+To verify if the artifact appeared to stop responding because of the VM Agent:
+
+1. On the lab VM, navigate to *C:\\WindowsAzure\\logs*.
+1. Open the file *WaAppAgent.log*.
+1. Look for entries that show the VM Agent starting, finishing initialization, and the first sent heartbeat, around the time you experienced the artifact issue.
+
+ ```text
+ [00000006] [11/14/2019 05:52:13.44] [INFO] WindowsAzureGuestAgent starting. Version 2.7.41491.949
+ ...
+ [00000006] [11/14/2019 05:52:31.77] [WARN] Waiting for OOBE to Complete ...
+ ...
+ [00000006] [11/14/2019 06:02:30.43] [WARN] Waiting for OOBE to Complete ...
+ [00000006] [11/14/2019 06:02:33.43] [INFO] StateExecutor initialization completed.
+ [00000020] [11/14/2019 06:02:33.43] [HEART] WindowsAzureGuestAgent Heartbeat.
+ ```
+
+In the previous example, the VM Agent took 10 minutes and 20 seconds to start. The cause was the OOBE service taking a long time to start.
+
+For general information about Azure extensions, see [Azure virtual machine extensions and features](/azure/virtual-machines/extensions/overview).
+
+### Investigate script issues
+
+The artifact installation could fail because of the way the artifact installation script is authored. For example:
+
+- The script has mandatory parameters, but fails to pass a value, either by allowing the user to leave it blank, or because there's no default value in the *artifactfile.json* definition file. The script stops responding because it's awaiting user input.
+
+- The script requires user input as part of execution. Scripts should work silently without requiring user intervention.
+
+To troubleshoot whether the script is causing the artifact to appear to stop responding:
+
+1. Copy the script to the VM, or locate it on the VM in the artifact script download location, *C:\\Packages\\Plugins\\Microsoft.Compute.CustomScriptExtension\\1.10.12\\Downloads*.
+1. Using an administrative command prompt, run the script on the VM, providing the same parameter values that caused the issue.
+1. Determine if the script shows any unwanted behavior. If so, request an update, or correct the script.
+
+> [!TIP]
+> You can submit proposed script corrections for artifacts hosted in the DevTest Labs [public repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts). For details, see the **Contributions** section in the [README](https://github.com/Azure/azure-devtestlab/blob/master/Artifacts/README.md) document.
+
+> [!NOTE]
+> A custom artifact needs to have the proper structure. For information about how to correctly construct an artifact, see [Create custom artifacts](devtest-lab-artifact-author.md). For an example of a properly structured artifact, see the [Test parameter types](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts/windows-test-paramtypes) artifact.
+>
+> For more information about writing and correcting artifact scripts, see [AUTHORING](https://github.com/Azure/azure-devtestlab/blob/master/Artifacts/AUTHORING.md).
## Next steps
-If you don't see your problem here or you can't resolve your issue, try one of the following channels for support:
+If you need more help, try one of the following support channels:
-* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).
-* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
+- Contact the Azure DevTest Labs experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/).
+- Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums).
+- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
+- Go to the [Azure support site](https://azure.microsoft.com/support/options) and select **Get Support** to file an Azure support incident.
devtest-labs Devtest Lab Troubleshoot Artifact Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-troubleshoot-artifact-failure.md
- Title: Diagnose artifact failures in an Azure DevTest Labs virtual machine
-description: DevTest Labs provide information that you can use to diagnose an artifact failure. This article shows you how to troubleshoot artifact failures.
--- Previously updated : 06/26/2020--
-# Diagnose artifact failures in the lab
-After you have created an artifact, you can check to see whether it succeeded or failed. Artifact logs in Azure DevTest Labs provide information that you can use to diagnose an artifact failure. You have a couple of options for viewing the artifact log information for a Windows VM:
-
-* In the Azure portal
-* In the VM
-
-> [!NOTE]
-> To ensure that failures are correctly identified and explained, it's important that the artifact has the proper structure. For information about how to correctly construct an artifact, see [Create custom artifacts](devtest-lab-artifact-author.md). To see an example of a properly structured artifact, check out the [Test parameter types](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts/windows-test-paramtypes) artifact.
-
-## Troubleshoot artifact failures by using the Azure portal
-
-1. In the Azure portal, in your list of resources, select your lab.
-2. Choose the Windows VM that includes the artifact that you want to investigate.
-3. In the left panel, under **GENERAL**, select **Artifacts**. A list of artifacts associated with that VM appears. The name of the artifact and the artifact status are indicated.
-
- ![Artifact status](./media/devtest-lab-troubleshoot-artifact-failure/devtest-lab-artifacts-failure-new.png)
-
-4. Select an artifact that shows a **Failed** status. The artifact opens. An extension message that includes details about the failure of the artifact is displayed.
-
- ![Artifact error message](./media/devtest-lab-troubleshoot-artifact-failure/devtest-lab-artifact-error.png)
--
-## Troubleshoot artifact failures from within the virtual machine
-
-1. Sign in to the VM that contains the artifact you want to diagnose.
-2. Go to C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\\*1.9*\Status, where *1.9* is the Azure Custom Script Extension version number.
-
- ![The Status file](./media/devtest-lab-troubleshoot-artifact-failure/devtest-lab-artifact-error-vm-status-new.png)
-
-3. Open the **status** file.
-
-For instructions on finding the log files on a **Linux** VM, see the following article: [Use the Azure Custom Script Extension Version 2 with Linux virtual machines](../virtual-machines/extensions/custom-script-linux.md#troubleshooting)
--
-## Related blog posts
-* [Join a VM to an existing Active Directory domain by using a Resource Manager template in DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/Join-a-VM-to-existing-AD-domain-using-ARM-template-AzureDevTestLabs)
-
-## Next steps
-* Learn how to [add a Git repository to a lab](devtest-lab-add-artifact-repo.md).
devtest-labs Use Paas Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-paas-services.md
Last updated 03/22/2022
This article describes platform-as-a-service (PaaS) support in Azure DevTest Labs. DevTest Labs supports PaaS through *environments*, which can include both PaaS and infrastructure-as-a-service (IaaS) resources. Environments contain services and software like virtual machines (VMs), databases, virtual networks, and web apps that are customized to work together.
-DevTest Labs creates environments by using preconfigured Azure Resource Manager (ARM) templates from Git repositories. Keeping the ARM templates under source control promotes consistent environment deployment and management.
- The following image shows a SharePoint farm created as an environment in a lab. ![Screenshot of a SharePoint environment in a lab.](media/use-paas-services/environments.png)
Lab owners can customize resource access or permissions without granting subscri
## Environment templates
+DevTest Labs creates environments by using preconfigured Azure Resource Manager (ARM) templates from Git repositories. Keeping the ARM templates under source control promotes consistent environment deployment and management.
+ In large organizations, development teams typically provide customized or isolated testing environments. The IT group provides environments that all teams within a business unit or a division can use. To enable and configure environment creation for labs, see [Use ARM templates to create DevTest Labs environments](devtest-lab-create-environment-from-arm.md). DevTest Labs has a public repository of preconfigured ARM templates for creating certain environments. For more information about the public environments, see [Enable and configure public environments](devtest-lab-create-environment-from-arm.md#enable-and-configure-public-environments).
-You can also create or configure your own ARM environment templates, store them in private Git repositories, and connect those repositories to labs. For more information, see [Use Azure Resource Manager (ARM) templates in Azure DevTest Labs](devtest-lab-use-arm-and-powershell-for-lab-resources.md).
+You can also [create or configure your own ARM templates](devtest-lab-use-resource-manager-template.md#store-arm-templates-in-git-repositories), [store them in private Git repositories](devtest-lab-use-resource-manager-template.md#store-arm-templates-in-git-repositories), and [connect those repositories to labs](devtest-lab-use-resource-manager-template.md#store-arm-templates-in-git-repositories).
## Template customization
You can provide certain custom lab information in ARM templates when creating en
- Lab location - Lab storage account where the ARM templates files are copied
-### Existing virtual network
+### Use an existing virtual network
When you create an environment, DevTest Labs can replace the `$(LabSubnetId)` token with the first lab subnet where **Use in virtual machine creation** is set to **true**. This modification allows the environment to use previously created virtual networks. [Connect environments to the lab's virtual network](connect-environment-lab-virtual-network.md) describes how to modify an ARM template to use the `$(LabSubnetId)` token. To use the same ARM template in test, staging, and production environments, use `$(LabSubnetId)` as a value in an ARM template parameter.
-### Nested templates
+### Use nested templates
DevTest Labs supports [nested ARM templates](/azure/azure-resource-manager/templates/linked-templates). To use `_artifactsLocation` and `_artifactsLocationSasToken` tokens to create a URI to a nested ARM template, see [Deploy DevTest Labs environments by using nested templates](deploy-nested-template-environments.md). For more information, see the **Deployment artifacts** section of the [Azure Resource Manager Best Practices Guide](https://github.com/Azure/azure-quickstart-templates/blob/master/1-CONTRIBUTION-GUIDE/best-practices.md#deployment-artifacts-nested-templates-scripts).
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/guest-configuration.md
and Arc-enabled servers, review the following details.
Before you can use the guest configuration feature of Azure Policy, you must register the `Microsoft.GuestConfiguration` resource provider. If assignment of a guest configuration policy is done through the portal, or if the subscription
-is enrolled in Azure Security Center, the resource provider is registered
+is enrolled in Microsoft Defender for Cloud, the resource provider is registered
automatically. You can manually register through the [portal](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), [Azure PowerShell](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell),
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
For an overview of the extensions platform, see [Azure Arc cluster extensions](.
```azurepowershell-interactive # Log in first with Connect-AzAccount if you're not using Cloud Shell
-
+ # Provider register: Register the Azure Policy provider Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights' ```
For an overview of the extensions platform, see [Azure Arc cluster extensions](.
> Note the following for Azure Policy extension creation: > - Auto-upgrade is enabled by default which will update Azure Policy extension minor version if any new changes are deployed. > - Any proxy variables passed as parameters to `connectedk8s` will be propagated to the Azure Policy extension to support outbound proxy.
->
+>
To create an extension instance, for your Arc enabled cluster, run the following command substituting `<>` with your values: ```azurecli-interactive
field of the failed constraint. For details on _Non-compliant_ resources, see
Some other considerations: -- If the cluster subscription is registered with Azure Security Center, then Azure Security Center
+- If the cluster subscription is registered with Microsoft Defender for Cloud, then Microsoft Defender for Cloud
Kubernetes policies are applied on the cluster automatically. - When a deny policy is applied on cluster with existing Kubernetes resources, any pre-existing
Some other considerations:
> review the CRD for the following or a similar declaration: > > ```rego
-> input_containers[c] {
-> c := input.review.object.spec.initContainers[_]
+> input_containers[c] {
+> c := input.review.object.spec.initContainers[_]
> }
> ```
governance Regulatory Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/regulatory-compliance.md
definition as a reference, it's recommended to monitor the source of the Regulat
definitions in the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy/tree/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance).
-To link a custom Regulatory Compliance initiative to your Azure Security Center dashboard, see
-[Azure Security Center - Using custom security policies](../../../security-center/custom-security-policies.md).
+To link a custom Regulatory Compliance initiative to your Microsoft Defender for Cloud dashboard, see
+[Create custom security initiatives and policies](../../../defender-for-cloud/custom-security-policies.md).
## Regulatory Compliance in portal
governance Create And Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/create-and-manage.md
overview](../overview.md).
following built-in policy definitions by selecting the checkbox next to the policy definition (a scripted lookup sketch follows this list):
- Allowed locations
- - Monitor missing Endpoint Protection in Azure Security Center
+ - Endpoint protection should be installed on machines
- Non-internet-facing virtual machines should be protected with network security groups
- Azure Backup should be enabled for Virtual Machines
- Disk encryption should be applied on virtual machines
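If you prefer scripting to the portal checkboxes, here's a hedged Azure CLI sketch for looking up one of these built-in definitions by display name (an illustration only; assumes you're signed in, and display names must match exactly and can change between releases):

```azurecli
# Find a built-in policy definition by display name and show its resource name.
az policy definition list \
  --query "[?displayName=='Endpoint protection should be installed on machines'].{name:name, displayName:displayName}" \
  --output table
```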
governance Policy Devops Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/policy-devops-pipelines.md
and [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline).
1. In Azure DevOps, create a release pipeline that contains at least one stage, or open an existing release pipeline.
1. Add a pre- or post-deployment condition that includes the **Security and compliance assessment** task as a gate.
- [More details](/azure/devops/pipelines/release/deploy-using-approvals.md#configure-gate).
+ [More details](/azure/devops/pipelines/release/deploy-using-approvals#set-up-gates).
![Screenshot of Azure Policy Gate.](../media/devops-policy/azure-policy-gate.png)
load-testing How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-assign-roles.md
Azure Load Testing resources have three built-in roles that are available by def
If you have the **Owner**, **Contributor**, or **Load Test Owner** role at the subscription level, you automatically have the same permissions as the **Load Test Owner** at the resource level.
+You'll encounter this message if your account doesn't have the necessary permissions to manage tests.
+
> [!IMPORTANT]
> Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a resource may not have owner access to the resource group that contains the resource. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md#how-azure-rbac-works).
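As an illustration of granting one of these built-in roles at the resource scope, a hedged Azure CLI sketch (the principal, subscription, and resource names are placeholders, and the `Microsoft.LoadTestService/loadtests` resource path is an assumption to adapt to your environment):

```azurecli
# Assign the built-in Load Test Contributor role on a single Load Testing resource.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Load Test Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.LoadTestService/loadtests/<load-test-resource>"
```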
search Query Lucene Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-lucene-syntax.md
You can embed Boolean operators in a query string to improve the precision of a
|Text operator | Character | Example | Usage |
|--|--|--|--|
-| AND | `&`, `+` | `wifi + luxury` | Specifies terms that a match must contain. In the example, the query engine will look for documents containing both `wifi` and `luxury`. The plus character (`+`) is used for required terms. For example, `+wifi +luxury` stipulates that both terms must appear somewhere in the field of a single document.|
-| OR | `|` | `wifi | luxury` | Finds a match when either term is found. In the example, the query engine will return match on documents containing either `wifi` or `luxury` or both. Because OR is the default conjunction operator, you could also leave it out, such that `wifi luxury` is the equivalent of `wifi | luxury`.|
-| NOT | `!`, `-` | `wifi –luxury` | Returns matches on documents that exclude the term. For example, `wifi –luxury` will search for documents that have the `wifi` term but not `luxury`. <br/><br/>The `searchMode` parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there's no `+` or `|` operator on the other terms). Valid values include `any` or `all`. <br/><br/>`searchMode=any` increases the recall of queries by including more results, and by default `-` will be interpreted as "OR NOT". For example, `wifi -luxury` will match documents that either contain the term `wifi` or those that don't contain the term `luxury`. <br/><br/>`searchMode=all` increases the precision of queries by including fewer results, and by default `-` will be interpreted as "AND NOT". For example, `wifi -luxury` will match documents that contain the term `wifi` and don't contain the term `luxury`. This is arguably a more intuitive behavior for the `-` operator. Therefore, you should consider using `searchMode=all` instead of `searchMode=any` if you want to optimize searches for precision instead of recall, *and* your users frequently use the `-` operator in searches.<br/><br/>When deciding on a `searchMode` setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
+| AND | `+` | `wifi AND luxury` | Specifies terms that a match must contain. In the example, the query engine will look for documents containing both `wifi` and `luxury`. The plus character (`+`) can also be used directly in front of a term to make it required. For example, `+wifi +luxury` stipulates that both terms must appear somewhere in the field of a single document.|
+| OR | | `wifi OR luxury` | Finds a match when either term is found. In the example, the query engine will return a match on documents containing either `wifi` or `luxury` or both. Because OR is the default conjunction operator, you could also leave it out, such that `wifi luxury` is the equivalent of `wifi OR luxury`.|
+| NOT | `!`, `-` | `wifi –luxury` | Returns matches on documents that exclude the term. For example, `wifi –luxury` will search for documents that have the `wifi` term but not `luxury`. <br/><br/>The `searchMode` parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there are no Boolean operators on the other terms). Valid values include `any` or `all`. <br/><br/>`searchMode=any` increases the recall of queries by including more results, and by default `-` will be interpreted as "OR NOT". For example, `wifi -luxury` will match documents that either contain the term `wifi` or those that don't contain the term `luxury`. <br/><br/>`searchMode=all` increases the precision of queries by including fewer results, and by default `-` will be interpreted as "AND NOT". For example, `wifi -luxury` will match documents that contain the term `wifi` and don't contain the term `luxury`. This is arguably a more intuitive behavior for the `-` operator. Therefore, you should consider using `searchMode=all` instead of `searchMode=any` if you want to optimize searches for precision instead of recall, *and* your users frequently use the `-` operator in searches.<br/><br/>When deciding on a `searchMode` setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
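To see `searchMode` and the full Lucene syntax in action without writing app code, one hedged option is calling the Search REST API through `az rest` (a sketch only: the service name, index name, `admin-key` value, and api-version are placeholder assumptions to replace with your own):

```azurecli
# POST a full-Lucene query; searchMode=all makes '-' behave as "AND NOT".
az rest --method post \
  --url "https://<service-name>.search.windows.net/indexes/<index-name>/docs/search?api-version=2020-06-30" \
  --headers "Content-Type=application/json" "api-key=<admin-key>" \
  --skip-authorization-header \
  --body '{"search": "wifi -luxury", "queryType": "full", "searchMode": "all"}'
```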
## <a name="bkmk_fields"></a> Fielded search
search Query Simple Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-simple-syntax.md
You can embed Boolean operators in a query string to improve the precision of a
|-- |--|-|
| `+` | `pool + ocean` | An AND operation. For example, `pool + ocean` stipulates that a document must contain both terms.|
| `|` | `pool | ocean` | An OR operation finds a match when either term is found. In the example, the query engine will return a match on documents containing either `pool` or `ocean` or both. Because OR is the default conjunction operator, you could also leave it out, such that `pool ocean` is the equivalent of `pool | ocean`.|
-| `-` | `pool – ocean` | A NOT operation returns matches on documents that exclude the term. </p>To get the expected behavior on a NOT expression, consider setting `"searchMode=all"` on the request. Otherwise, under the default of `"searchMode=any"`, you will get matches on `pool`, plus matches on all documents that do not contain `ocean`, which could be a lot of documents. The "searchMode" parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there is no `+` or `|` operator on the other terms). Using `"searchMode=all"` increases the precision of queries by including fewer results, and by default `-` will be interpreted as "AND NOT". </p>When deciding on a "searchMode" setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
+| `-` | `pool – ocean` | A NOT operation returns matches on documents that exclude the term. </p>To get the expected behavior on a NOT expression, set `"searchMode=all"` on the request. Otherwise, under the default of `"searchMode=any"`, you will get matches on `pool`, plus matches on all documents that do not contain `ocean`, which could be a lot of documents. The "searchMode" parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there is no `+` or `|` operator on the other terms). Using `"searchMode=all"` increases the precision of queries by including fewer results, and by default `-` will be interpreted as "AND NOT". </p>When deciding on a "searchMode" setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
<a name="prefix-search"></a>
site-recovery Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-log-analytics.md
AzureDiagnostics
## Set up alerts - examples
-You can set up Site Recovery alerts based on Azure Monitor data. [Learn more](../azure-monitor/alerts/alerts-log.md#create-a-log-alert-rule-in-the-azure-portal) about setting up log alerts.
+You can set up Site Recovery alerts based on Azure Monitor data. [Learn more](../azure-monitor/alerts/alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal) about setting up log alerts.
> [!NOTE]
> Some of the examples use **replicationProviderName_s** set to **A2A**. This sets alerts for Azure VMs that are replicated to a secondary Azure region. In these examples, you can replace **A2A** with **InMageAzureV2** if you want to set alerts for on-premises VMware VMs or physical servers replicated to Azure.
storage Blob Containers Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-cli.md
az storage container generate-sas \
## Next steps
-In this how-to article, you learned how to manage containers in Azure blob storage. To learn more about working with blob storage by using Azure CLI, explore Azure CLI samples for Blob storage.
+In this how-to article, you learned how to manage containers in Azure blob storage. To learn more about working with blob storage by using Azure CLI, select an option below.
+
+> [!div class="nextstepaction"]
+> [Manage block blobs with Azure CLI](blob-cli.md)
> [!div class="nextstepaction"]
> [Azure CLI samples for Blob storage](storage-samples-blobs-cli.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Previously updated : 03/12/2022 Last updated : 03/31/2022
You can use unmanaged disks in storage accounts with network rules applied to ba
## Change the default network access rule
-By default, storage accounts accept connections from clients on any network. To limit access to selected networks, you must first change the default action.
+By default, storage accounts accept connections from clients on any network. You can limit access to selected networks **or** prevent traffic from all networks and permit access only through a [private endpoint](storage-private-endpoints.md).
> [!WARNING]
-> Making changes to network rules can impact your applications' ability to connect to Azure Storage. Setting the default network rule to **deny** blocks all access to the data unless specific network rules that **grant** access are also applied. Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access.
-
-### Managing default network access rules
-
-You can manage default network access rules for storage accounts through the Azure portal, PowerShell, or CLIv2.
-
-#### [Portal](#tab/azure-portal)
+> Changing this setting can impact your application's ability to connect to Azure Storage. Make sure to grant access to any allowed networks or set up access through a [private endpoint](storage-private-endpoints.md) before you change this setting.
+
+### [Portal](#tab/azure-portal)
1. Go to the storage account you want to secure.
-2. Select on the settings menu called **Networking**.
+2. Locate the **Networking** settings under **Security + networking**.
-3. To deny access by default, choose to allow access from **Selected networks**. To allow traffic from all networks, choose to allow access from **All networks**.
+3. Choose which type of public network access you want to allow.
+
+ - To allow traffic from all networks, select **Enabled from all networks**.
+
+ - To allow traffic only from specific virtual networks, select **Enabled from selected virtual networks and IP addresses**.
+
+   - To block traffic from all networks, use PowerShell or the Azure CLI. This setting does not yet appear in the Azure portal.
4. Select **Save** to apply your changes.

<a id="powershell"></a>
-#### [PowerShell](#tab/azure-powershell)
+### [PowerShell](#tab/azure-powershell)
1. Install the [Azure PowerShell](/powershell/azure/install-Az-ps) and [sign in](/powershell/azure/authenticate-azureps).
-2. Display the status of the default rule for the storage account.
-
- ```powershell
- (Get-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -AccountName "mystorageaccount").DefaultAction
- ```
+2. Choose which type of public network access you want to allow.
-3. Set the default rule to deny network access by default.
+ - To allow traffic from all networks, use the `Update-AzStorageAccountNetworkRuleSet` command, and set the `-DefaultAction` parameter to `Allow`.
- ```powershell
- Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -DefaultAction Deny
- ```
+ ```powershell
+ Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -DefaultAction Allow
+ ```
+
+ - To allow traffic only from specific virtual networks, use the `Update-AzStorageAccountNetworkRuleSet` command and set the `-DefaultAction` parameter to `Deny`.
-4. Set the default rule to allow network access by default.
+ ```powershell
+ Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -DefaultAction Deny
+ ```
- ```powershell
- Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -DefaultAction Allow
- ```
+ - To block traffic from all networks, use the `Set-AzStorageAccount` command and set the `-PublicNetworkAccess` parameter to `Disabled`. Traffic will be allowed only through a [private endpoint](storage-private-endpoints.md). You'll have to create that private endpoint.
+
+ ```powershell
+ Set-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount" -PublicNetworkAccess Disabled
+ ```
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli).
-2. Display the status of the default rule for the storage account.
+2. Choose which type of public network access you want to allow.
- ```azurecli
- az storage account show --resource-group "myresourcegroup" --name "mystorageaccount" --query networkRuleSet.defaultAction
- ```
+ - To allow traffic from all networks, use the `az storage account update` command, and set the `--default-action` parameter to `Allow`.
-3. Set the default rule to deny network access by default.
+ ```azurecli
+ az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Allow
+ ```
+
+ - To allow traffic only from specific virtual networks, use the `az storage account update` command and set the `--default-action` parameter to `Deny`.
- ```azurecli
- az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Deny
- ```
+ ```azurecli
+ az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Deny
+ ```
-4. Set the default rule to allow network access by default.
+ - To block traffic from all networks, use the `az storage account update` command and set the `--public-network-access` parameter to `Disabled`. Traffic will be allowed only through a [private endpoint](storage-private-endpoints.md). You'll have to create that private endpoint.
- ```azurecli
- az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --default-action Allow
- ```
+ ```azurecli
+      az storage account update --resource-group "myresourcegroup" --name "mystorageaccount" --public-network-access Disabled
+ ```
time-series-insights Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-storage.md
Don't delete your Azure Time Series Insights Gen2 files. Manage related data fro
Parquet is an open-source columnar file format designed for efficient storage and performance. Azure Time Series Insights Gen2 uses Parquet to enable Time Series ID-based query performance at scale.
-For more information about the Parquet file type, read the [Parquet documentation](https://parquet.apache.org/documentation/latest/).
+For more information about the Parquet file type, read the [Parquet documentation](https://parquet.apache.org/docs/).
Azure Time Series Insights Gen2 stores copies of your data as follows:
virtual-machines Dcv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcv3-series.md
Base All-Core Frequency: 2.8 GHz<br>
| Size | Physical Cores | Memory GB | Temp storage (SSD) GiB | Max data disks | Max NICs | EPC Memory GB |
|------|----------------|-----------|------------------------|----------------|----------|---------------|
-| Standard_DC1s_v3 | 1 | 8 | N/A | 4 | 2 | 4 |
-| Standard_DC2s_v3 | 2 | 16 | N/A | 8 | 2 | 8 |
-| Standard_DC4s_v3 | 4 | 32 | N/A | 16 | 4 | 16 |
-| Standard_DC8s_v3 | 8 | 64 | N/A | 32 | 8 | 32 |
-| Standard_DC16s_v3 | 16 | 128 | N/A | 32 | 8 | 64 |
-| Standard_DC24s_v3 | 24 | 192 | N/A | 32 | 8 | 128 |
-| Standard_DC32s_v3 | 32 | 256 | N/A | 32 | 8 | 192 |
-| Standard_DC48s_v3 | 48 | 384 | N/A | 32 | 8 | 256 |
+| Standard_DC1s_v3 | 1 | 8 | Remote Storage Only | 4 | 2 | 4 |
+| Standard_DC2s_v3 | 2 | 16 | Remote Storage Only | 8 | 2 | 8 |
+| Standard_DC4s_v3 | 4 | 32 | Remote Storage Only | 16 | 4 | 16 |
+| Standard_DC8s_v3 | 8 | 64 | Remote Storage Only | 32 | 8 | 32 |
+| Standard_DC16s_v3 | 16 | 128 | Remote Storage Only | 32 | 8 | 64 |
+| Standard_DC24s_v3 | 24 | 192 | Remote Storage Only | 32 | 8 | 128 |
+| Standard_DC32s_v3 | 32 | 256 | Remote Storage Only | 32 | 8 | 192 |
+| Standard_DC48s_v3 | 48 | 384 | Remote Storage Only | 32 | 8 | 256 |
## DCdsv3-series Technical specifications