Updates from: 01/28/2022 02:07:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-monitor.md
After you deploy the template, it can take a few minutes (typically no more than
After you've deployed the template and waited a few minutes for the resource projection to complete, follow these steps to associate your subscription with your Azure AD B2C directory.
+> [!NOTE]
+> On the **Portal settings | Directories + subscriptions** page, ensure that your Azure AD B2C and Azure AD tenants are selected under **Current + delegated directories**.
+ 1. Sign out of the Azure portal if you're currently signed in (this allows your session credentials to be refreshed in the next step). 1. Sign in to the [Azure portal](https://portal.azure.com) with your **Azure AD B2C** administrative account. This account must be a member of the security group you specified in the [Delegate resource management](#3-delegate-resource-management) step. 1. Select the **Directories + subscriptions** icon in the portal toolbar.
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 01/25/2022 Last updated : 01/26/2022 -+
Azure AD Conditional Access supports the following device platforms:
- iOS - Windows - macOS
+- Linux (Preview)
If you block legacy authentication using the **Other clients** condition, you can also set the device platform condition. > [!IMPORTANT]
-> Microsoft recommends that you have a Conditional Access policy for unsupported device platforms. As an example, if you want to block access to your corporate resources from Linux or any other unsupported clients, you should configure a policy with a Device platforms condition that includes any device and excludes supported device platforms and Grant control set to Block access.
+> Microsoft recommends that you have a Conditional Access policy for unsupported device platforms. As an example, if you want to block access to your corporate resources from **Chrome OS** or any other unsupported clients, you should configure a policy with a Device platforms condition that includes any device and excludes supported device platforms and Grant control set to Block access.
## Locations
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Previously updated : 11/04/2021 Last updated : 01/27/2022
Selecting this checkbox will require users to perform Azure AD Multi-Factor Auth
### Require device to be marked as compliant
-Organizations who have deployed Microsoft Intune can use the information returned from their devices to identify devices that meet specific compliance requirements. This policy compliance information is forwarded from Intune to Azure AD where Conditional Access can make decisions to grant or block access to resources. For more information about compliance policies, see the article [Set rules on devices to allow access to resources in your organization using Intune](/intune/protect/device-compliance-get-started).
+Organizations who have deployed Microsoft Intune can use the information returned from their devices to identify devices that meet specific compliance requirements. Policy compliance information is sent from Intune to Azure AD so Conditional Access can decide to grant or block access to resources. For more information about compliance policies, see the article [Set rules on devices to allow access to resources in your organization using Intune](/intune/protect/device-compliance-get-started).
A device can be marked as compliant by Intune (for any device OS) or by third-party MDM system for Windows 10 devices. A list of supported third-party MDM systems can be found in the article [Support third-party device compliance partners in Intune](/mem/intune/protect/device-compliance-partners).
Devices must be registered in Azure AD before they can be marked as compliant. M
**Remarks** - The **Require device to be marked as compliant** requirement:
- - Only supports Windows Windows current (Windows 10+), iOS, Android and macOS devices registered with Azure AD and enrolled with Intune.
+ - Only supports Windows 10+, iOS, Android, and macOS devices registered with Azure AD and enrolled with Intune.
- For devices enrolled with third-party MDM systems, see [Support third-party device compliance partners in Intune](/mem/intune/protect/device-compliance-partners).
- - Conditional Access cannot consider Microsoft Edge in InPrivate mode as a compliant device.
+ - Conditional Access can't consider Microsoft Edge in InPrivate mode as a compliant device.
> [!NOTE] > On Windows 7, iOS, Android, macOS, and some third-party web browsers Azure AD identifies the device using a client certificate that is provisioned when the device is registered with Azure AD. When a user first signs in through the browser the user is prompted to select the certificate. The end user must select this certificate before they can continue to use the browser.
Devices must be registered in Azure AD before they can be marked as compliant. M
Organizations can choose to use the device identity as part of their Conditional Access policy. Organizations can require that devices are hybrid Azure AD joined using this checkbox. For more information about device identities, see the article [What is a device identity?](../devices/overview.md).
-When using the [device-code OAuth flow](../develop/v2-oauth2-device-code.md), the require managed device grant control or a device state condition are not supported. This is because the device performing authentication cannot provide its device state to the device providing a code and the device state in the token is locked to the device performing authentication. Use the require multi-factor authentication grant control instead.
+When using the [device-code OAuth flow](../develop/v2-oauth2-device-code.md), the require managed device grant control or a device state condition aren't supported. This is because the device performing authentication can't provide its device state to the device providing a code and the device state in the token is locked to the device performing authentication. Use the require multi-factor authentication grant control instead.
**Remarks** - The **Require hybrid Azure AD joined device** requirement: - Only supports domain joined Windows down-level (pre Windows 10) and Windows current (Windows 10+) devices.
- - Conditional Access cannot consider Microsoft Edge in InPrivate mode as a hybrid Azure AD joined device.
+ - Conditional Access can't consider Microsoft Edge in InPrivate mode as a hybrid Azure AD joined device.
### Require approved client app Organizations can require that an access attempt to the selected cloud apps needs to be made from an approved client app. These approved client apps support [Intune app protection policies](/intune/app-protection-policy) independent of any mobile-device management (MDM) solution.
-In order to apply this grant control, Conditional Access requires that the device is registered in Azure Active Directory, which requires the use of a broker app. The broker app can be the Microsoft Authenticator for iOS, or either the Microsoft Authenticator or Microsoft Company portal for Android devices. If a broker app is not installed on the device when the user attempts to authenticate, the user gets redirected to the appropriate app store to install the required broker app.
+In order to apply this grant control, Conditional Access requires that the device is registered in Azure Active Directory, which requires the use of a broker app. The broker app can be the Microsoft Authenticator for iOS, or either the Microsoft Authenticator or Microsoft Company portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user gets redirected to the appropriate app store to install the required broker app.
The following client apps have been confirmed to support this setting:
- The **Require approved client app** requirement: - Only supports the iOS and Android for device platform condition. - A broker app is required to register the device. The broker app can be the Microsoft Authenticator for iOS, or either the Microsoft Authenticator or Microsoft Company portal for Android devices.-- Conditional Access cannot consider Microsoft Edge in InPrivate mode an approved client app.-- Using Azure AD Application Proxy to enable the Power BI mobile app to connect to on premises Power BI Report Server is not supported with conditional access policies that require the Microsoft Power BI app as an approved client app.
+- Conditional Access can't consider Microsoft Edge in InPrivate mode an approved client app.
+- Using Azure AD Application Proxy to enable the Power BI mobile app to connect to on-premises Power BI Report Server isn't supported with Conditional Access policies that require the Microsoft Power BI app as an approved client app.
See the article, [How to: Require approved client apps for cloud app access with Conditional Access](app-based-conditional-access.md) for configuration examples.
See the article, [How to: Require approved client apps for cloud app access with
In your Conditional Access policy, you can require an [Intune app protection policy](/intune/app-protection-policy) be present on the client app before access is available to the selected cloud apps.
-In order to apply this grant control, Conditional Access requires that the device is registered in Azure Active Directory, which requires the use of a broker app. The broker app can be either the Microsoft Authenticator for iOS, or the Microsoft Company portal for Android devices. If a broker app is not installed on the device when the user attempts to authenticate, the user gets redirected to the app store to install the broker app.
+In order to apply this grant control, Conditional Access requires that the device is registered in Azure Active Directory, which requires the use of a broker app. The broker app can be either the Microsoft Authenticator for iOS, or the Microsoft Company portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user gets redirected to the app store to install the broker app.
Applications are required to have the **Intune SDK** with **Policy Assurance** implemented and meet certain other requirements to support this setting. Developers implementing applications with the Intune SDK can find more information in the SDK documentation on these requirements.
The following client apps have been confirmed to support this setting:
- Apps for app protection policy support the Intune mobile application management feature with policy protection. - The **Require app protection policy** requirements: - Only supports the iOS and Android for device platform condition.
- - A broker app is required to register the device. On iOS, the broker app is Microsoft Authenticator and on Android, it is Intune Company Portal app.
+ - A broker app is required to register the device. On iOS, the broker app is Microsoft Authenticator and on Android, it's the Intune Company Portal app.
See the article, [How to: Require app protection policy and an approved client app for cloud app access with Conditional Access](app-protection-based-conditional-access.md) for configuration examples.
See the article, [How to: Require app protection policy and an approved client a
When user risk is detected through the user risk policy conditions, administrators can choose to have the user securely change the password using Azure AD self-service password reset. Users can then perform a self-service password reset to self-remediate; this process closes the user risk event to prevent unnecessary noise for administrators.
-When a user is prompted to change their password, they will first be required to complete multi-factor authentication. You'll want to make sure all of your users have registered for multi-factor authentication, so they are prepared in case risk is detected for their account.
+When a user is prompted to change their password, they'll first be required to complete multi-factor authentication. You'll want to make sure all of your users have registered for multi-factor authentication, so they're prepared in case risk is detected for their account.
> [!WARNING] > Users must have previously registered for self-service password reset before triggering the user risk policy.
When a user is prompted to change their password, they will first be required to
Restrictions when you configure a policy using the password change control. 1. The policy must be assigned to 'all cloud apps'. This requirement prevents an attacker from using a different app to change the user's password and reset account risk, by signing into a different app.
-1. Require password change cannot be used with other controls, like requiring a compliant device.
-1. The password change control can only be used with the user and group assignment condition, cloud app assignment condition (which must be set to all) and user risk conditions.
+1. Require password change can't be used with other controls, like requiring a compliant device.
+1. The password change control can only be used with the user and group assignment condition, cloud app assignment condition (which must be set to all), and user risk conditions.
### Terms of use
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
# Login to a Linux virtual machine in Azure with Azure Active Directory using openSSH certificate-based authentication
-To improve the security of Linux virtual machines (VMs) in Azure, you can integrate with Azure Active Directory (Azure AD) authentication. You can now use Azure AD as a core authentication platform and a certificate authority to SSH into a Linux VM using Azure AD and openSSH certificate-based authentication. This functionality allows organizations to centrally control and enforce Azure role-based access control (RBAC) and Conditional Access policies that manage access to the VMs. This article shows you how to create and configure a Linux VM and login with Azure AD using openSSH certificate-based authentication.
+To improve the security of Linux virtual machines (VMs) in Azure, you can integrate with Azure Active Directory (Azure AD) authentication. You can now use Azure AD as a core authentication platform and a certificate authority to SSH into a Linux VM using Azure AD and openSSH certificate-based authentication. This functionality allows organizations to manage access to VMs with Azure role-based access control (RBAC) and Conditional Access policies. This article shows you how to create and configure a Linux VM and login with Azure AD using openSSH certificate-based authentication.
> [!IMPORTANT] > This capability is now generally available! [The previous version that made use of device code flow was deprecated August 15, 2021](../../virtual-machines/linux/login-using-aad.md). To migrate from the old version to this version, see the section, [Migration from previous preview](#migration-from-previous-preview). - There are many security benefits of using Azure AD with openSSH certificate-based authentication to log in to Linux VMs in Azure, including: - Use your Azure AD credentials to log in to Azure Linux VMs.
There are many security benefits of using Azure AD with openSSH certificate-base
- Reduce reliance on local administrator accounts, credential theft, and weak credentials. - Password complexity and password lifetime policies configured for Azure AD help secure Linux VMs as well. - With Azure role-based access control, specify who can login to a VM as a regular user or with administrator privileges. When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. When employees leave your organization and their user account is disabled or removed from Azure AD, they no longer have access to your resources.-- With Conditional Access, configure policies to require multi-factor authentication and/or require client device you are using to SSH be a managed device (for example: compliant device or hybrid Azure AD joined) before you can SSH to Linux VMs. -- Use Azure deploy and audit policies to require Azure AD login for Linux VMs and to flag use of non-approved local accounts on the VMs.
+- With Conditional Access, configure policies to require multi-factor authentication and/or require that the client device you're using to SSH is a managed device (for example: compliant device or hybrid Azure AD joined) before you can SSH to Linux VMs.
+- Use Azure deploy and audit policies to require Azure AD login for Linux VMs and flag non-approved local accounts.
- Login to Linux VMs with Azure Active Directory also works for customers that use Federation Services. ## Supported Linux distributions and Azure regions
The following Linux distributions are currently supported during the preview of
| CentOS | CentOS 7, CentOS 8 | | Debian | Debian 9, Debian 10 | | openSUSE | openSUSE Leap 42.3, openSUSE Leap 15.1+ |
-| RedHat Enterprise Linux | RHEL 7.4 to RHEL 7.10, RHEL 8.3+ |
-| SUSE Linux Enterprise Server | SLES 12, SLES 15.1+ |
+| RedHat Enterprise Linux (RHEL) | RHEL 7.4 to RHEL 7.10, RHEL 8.3+ |
+| SUSE Linux Enterprise Server (SLES) | SLES 12, SLES 15.1+ |
| Ubuntu Server | Ubuntu Server 16.04 to Ubuntu Server 20.04 | The following Azure regions are currently supported for this feature:
The following Azure regions are currently supported for this feature:
It's not supported to use this extension on Azure Kubernetes Service (AKS) clusters. For more information, see [Support policies for AKS](../../aks/support-policies.md).
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you choose to install and use the CLI locally, you must be running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
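For example, a quick check-and-upgrade sketch (the in-place `az upgrade` command assumes Azure CLI 2.11.0 or later is already installed):

```azurecli
# Show the installed Azure CLI version
az --version

# Upgrade the CLI in place if it's older than 2.22.1
az upgrade
```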
## Requirements for login with Azure AD using openSSH certificate-based authentication
-To enable Azure AD login using SSH certificate-based authentication for your Linux VMs in Azure, you need to ensure the following network, virtual machine, and client (ssh client) requirements are met.
+To enable Azure AD login using SSH certificate-based authentication for Linux VMs in Azure, ensure the following network, virtual machine, and client (ssh client) requirements are met.
### Network
For Azure China 21Vianet
Ensure your VM is configured with the following functionality: -- System assigned managed identity. This option gets automatically selected when you use Azure portal to create VM and select Azure AD login option. You can also enable System assigned managed identity on a new or an existing VM using the Azure CLI.
+- System assigned managed identity. This option gets automatically selected when you use the Azure portal to create a VM and select the Azure AD login option. You can also enable system-assigned managed identity on a new or an existing VM using the Azure CLI.
- `aadsshlogin` and `aadsshlogin-selinux` (as appropriate). These packages get installed with the AADSSHLoginForLinux VM extension. The extension is installed when you use the Azure portal to create a VM and enable Azure AD login (Management tab) or via the Azure CLI. ### Client
Ensure your VM is configured with the following functionality:
Ensure your client meets the following requirements: - SSH client must support OpenSSH based certificates for authentication. You can use Az CLI (2.21.1 or higher) with OpenSSH (included in Windows 10 version 1803 or higher) or Azure Cloud Shell to meet this requirement. -- SSH extension for Az CLI. You can install this using `az extension add --name ssh`. You do not need to install this extension when using Azure Cloud Shell as it comes pre-installed.-- If you are using any other SSH client other than Az CLI or Azure Cloud Shell that supports OpenSSH certificates, you will still need to use Az CLI with SSH extension to retrieve ephemeral SSH cert and optionally a config file and then use the config file with your SSH client.
+- SSH extension for Az CLI. You can install this using `az extension add --name ssh`. You don't need to install this extension when using Azure Cloud Shell as it comes pre-installed.
+- If you're using an SSH client other than Az CLI or Azure Cloud Shell that supports OpenSSH certificates, you'll still need to use Az CLI with the SSH extension to retrieve an ephemeral SSH certificate and optionally a config file, and then use the config file with your SSH client.
- TCP connectivity from the client to either the public or private IP of the VM (ProxyCommand or SSH forwarding to a machine with connectivity also works). > [!IMPORTANT]
To use Azure AD login in for Linux VM in Azure, you need to first enable Azure A
### Using Azure portal create VM experience to enable Azure AD login
-You can enable Azure AD login for any of the supported Linux distributions mentioned above using the Azure portal.
+You can enable Azure AD login for any of the [supported Linux distributions mentioned](#supported-linux-distributions-and-azure-regions) using the Azure portal.
-As an example, to create an Ubuntu Server 18.04 LTS VM in Azure with Azure AD logon:
+As an example, to create an Ubuntu Server 18.04 Long Term Support (LTS) VM in Azure with Azure AD logon:
1. Sign in to the Azure portal, with an account that has access to create VMs, and select **+ Create a resource**. 1. Click on **Create** under **Ubuntu Server 18.04 LTS** in the **Popular** view. 1. On the **Management** tab, 1. Check the box to enable **Login with Azure Active Directory (Preview)**. 1. Ensure **System assigned managed identity** is checked.
-1. Go through the rest of the experience of creating a virtual machine. During this preview, you will have to create an administrator account with username and password/SSH public key.
+1. Go through the rest of the experience of creating a virtual machine. During this preview, you'll have to create an administrator account with username and password or SSH public key.
### Using the Azure Cloud Shell experience to enable Azure AD login
Azure Cloud Shell is a free, interactive shell that you can use to run the steps
- Open Cloud Shell in your browser. - Select the Cloud Shell button on the menu in the upper-right corner of the Azure portal.
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see the article Install Azure CLI.
+If you choose to install and use the CLI locally, this article requires that you're running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see the article Install Azure CLI.
1. Create a resource group with [az group create](/cli/azure/group#az_group_create). 1. Create a VM with [az vm create](/cli/azure/vm#az_vm_create&preserve-view=true) using a supported distribution in a supported region. 1. Install the Azure AD login VM extension with [az vm extension set](/cli/azure/vm/extension#az_vm_extension_set).
-The following example deploys a VM named *myVM*, using *Ubuntu 18.04 LTS*, into a resource group named *AzureADLinuxVM*, in the *southcentralus* region. It then installs the *Azure AD login VM extension* to enable Azure AD login for Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines.
+The following example deploys a VM and then installs the extension to enable Azure AD login for Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines.
The example can be customized to support your testing requirements as needed.
az vm extension set \
It takes a few minutes to create the VM and supporting resources.
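For reference, a hedged end-to-end sketch of the deployment flow described above; the resource names, region, and image alias are illustrative and may differ from the article's own sample:

```azurecli
# Create a resource group for the VM
az group create --name AzureADLinuxVM --location southcentralus

# Create an Ubuntu VM with a system-assigned managed identity
az vm create \
    --resource-group AzureADLinuxVM \
    --name myVM \
    --image UbuntuLTS \
    --assign-identity \
    --admin-username azureuser \
    --generate-ssh-keys

# Install the Azure AD login VM extension
az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory \
    --name AADSSHLoginForLinux \
    --resource-group AzureADLinuxVM \
    --vm-name myVM
```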
-The AADSSHLoginForLinux extension can be installed on an existing (supported distribution) Linux VM with a running VM agent to enable Azure AD authentication. If deploying this extension to a previously created VM, ensure the machine has at least 1 GB of memory allocated else the extension will fail to install.
+The AADSSHLoginForLinux extension can be installed on an existing (supported distribution) Linux VM with a running VM agent to enable Azure AD authentication. If deploying this extension to a previously created VM, the VM must have at least 1 GB of memory allocated or the install will fail.
The provisioningState of Succeeded is shown once the extension is successfully installed on the VM. The VM must have a running [VM agent](../../virtual-machines/extensions/agent-linux.md) to install the extension. ## Configure role assignments for the VM
-Now that you have created the VM, you need to configure Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login:
+Now that you've created the VM, you need to configure Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login:
- **Virtual Machine Administrator Login**: Users with this role assigned can log in to an Azure virtual machine with administrator privileges. - **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges.
-To allow a user to log in to the VM over SSH, you must assign them either the Virtual Machine Administrator Login or Virtual Machine User Login role. An Azure user with the Owner or Contributor roles assigned for a VM do not automatically have privileges to Azure AD login to the VM over SSH. This separation is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
+To log in to a VM over SSH, you must have the Virtual Machine Administrator Login or Virtual Machine User Login role. An Azure user with the Owner or Contributor role assigned for a VM doesn't automatically have privileges to log in to the VM over SSH with Azure AD. This separation is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
There are multiple ways you can configure role assignments for a VM; for example, you can use:
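One option is the Azure CLI. Here's a hedged sketch, in which the user's UPN and the resource names are placeholders, that grants the Virtual Machine User Login role scoped to a single VM:

```azurecli
# Look up the VM's resource ID to use as the role assignment scope
VM_ID=$(az vm show --resource-group AzureADLinuxVM --name myVM --query id --output tsv)

# Grant a user permission to log in to this VM as a regular user
az role assignment create \
    --role "Virtual Machine User Login" \
    --assignee "user@contoso.com" \
    --scope $VM_ID
```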
For more information on how to use Azure RBAC to manage access to your Azure sub
## Install SSH extension for Az CLI
-If you are using Azure Cloud Shell, then no other setup is needed as both the minimum required version of Az CLI and SSH extension for Az CLI are already included in the Cloud Shell environment.
+If you're using Azure Cloud Shell, then no other setup is needed as both the minimum required version of Az CLI and SSH extension for Az CLI are already included in the Cloud Shell environment.
Run the following command to add the SSH extension for Az CLI:
az extension show --name ssh
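A minimal sketch that installs the extension and then verifies it's present (the `az extension add` command is the one quoted in the client requirements above):

```azurecli
# Add the SSH extension for Az CLI (not needed in Azure Cloud Shell)
az extension add --name ssh

# Confirm the extension is installed and check its version
az extension show --name ssh
```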
## Using Conditional Access
-You can enforce Conditional Access policies such as require MFA for the user, require compliant/Hybrid Azure AD joined device for the device running SSH client, check for low user and sign-in risk before authorizing access to Linux VMs in Azure that are enabled with Azure AD login in.
-
-To apply Conditional Access policy, you must select the "Azure Linux VM Sign-In" app from the cloud apps or actions assignment option and then use user and /or sign-in risk as a condition and Access controls as Grant access after satisfying require multi-factor authentication and/or require compliant/Hybrid Azure AD joined device.
+You can enforce Conditional Access policies, such as requiring multi-factor authentication, requiring a compliant or hybrid Azure AD joined device for the device running the SSH client, and checking for risk, before authorizing access to Linux VMs in Azure that are enabled with Azure AD login. The application that appears in the Conditional Access policy is called "Azure Linux VM Sign-In".
> [!NOTE] > Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Az CLI running on Windows and macOS. It is not supported when using Az CLI on Linux or Azure Cloud Shell.
The following example automatically resolves the appropriate IP address for the
az ssh vm -n myVM -g AzureADLinuxVM ```
-If prompted, enter your Azure AD login credentials at the login page, perform an MFA, and/or satisfy device checks. You will only be prompted if your az CLI session does not already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you will be automatically connected to the VM.
+If prompted, enter your Azure AD login credentials at the login page, perform an MFA, and/or satisfy device checks. You'll only be prompted if your az CLI session doesn't already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you'll be automatically connected to the VM.
-You are now signed in to the Azure Linux virtual machine with the role permissions as assigned, such as VM User or VM Administrator. If your user account is assigned the Virtual Machine Administrator Login role, you can use sudo to run commands that require root privileges.
+You're now signed in to the Azure Linux virtual machine with the role permissions as assigned, such as VM User or VM Administrator. If your user account is assigned the Virtual Machine Administrator Login role, you can use sudo to run commands that require root privileges.
### Using Az Cloud Shell
Az Cloud Shell will automatically connect to a session in the context of the sig
az login ```
-Then you can use the normal az ssh vm commands to connect using name and resource group or IP address of the VM.
+Then you can use the normal `az ssh vm` commands to connect using name and resource group or IP address of the VM.
```azurecli az ssh vm -n myVM -g AzureADLinuxVM
You can then connect to the VM through normal OpenSSH usage. Connection can be d
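For example, a hedged sketch of exporting an OpenSSH-compatible config with the Az CLI SSH extension and then connecting with plain `ssh`; the file path is illustrative, and the host alias comes from the generated config:

```azurecli
# Export an SSH config plus an ephemeral certificate for the VM
az ssh config --file ./aadssh.config --resource-group AzureADLinuxVM --name myVM

# Connect with standard OpenSSH using the generated config
ssh -F ./aadssh.config <host-alias-from-generated-config>
```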
## Sudo and Azure AD login
-Once, users assigned the VM Administrator role successfully SSH into a Linux VM, they will be able to run sudo with no other interaction or authentication requirement. Users assigned the VM User role will not be able to run sudo.
+Once users assigned the VM Administrator role successfully SSH into a Linux VM, they'll be able to run sudo with no other interaction or authentication requirement. Users assigned the VM User role won't be able to run sudo.
## Virtual machine scale set support
az vmss identity assign --name myVMSS --resource-group AzureADLinuxVM
az vmss extension set --publisher Microsoft.Azure.ActiveDirectory --name AADSSHLoginForLinux --resource-group AzureADLinuxVM --vmss-name myVMSS ```
-Virtual machine scale sets usually don't have public IP addresses, so you must have connectivity to them from another machine that can reach their Azure virtual network. This example shows how to use the private IP of a virtual machine scale set VM to connect from a machine in the same virtual network.
+Virtual machine scale sets usually don't have public IP addresses. You must have connectivity to them from another machine that can reach their Azure virtual network. This example shows how to use the private IP of a virtual machine scale set VM to connect from a machine in the same virtual network.
```azurecli az ssh vm --ip 10.11.123.456
For customers who are using previous version of Azure AD login for Linux that wa
az vm identity assign -g myResourceGroup -n myVm ```
-1. Install the AADSSHLoginForLinux extension on the VM
+1. Install the AADSSHLoginForLinux extension on the VM.
```azurecli az vm extension set \
For customers who are using previous version of Azure AD login for Linux that wa
## Using Azure Policy to ensure standards and assess compliance
-Use Azure Policy to ensure Azure AD login is enabled for your new and existing Linux virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Linux VMs within your environment that do not have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Linux VMs that do not have Azure AD login enabled, as well as remediate existing Linux VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Linux VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
+Use Azure Policy to ensure Azure AD login is enabled for your new and existing Linux virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Linux VMs within your environment that don't have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Linux VMs that don't have Azure AD login enabled, as well as remediate existing Linux VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Linux VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
## Troubleshoot sign-in issues Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign in. Use the following sections to correct these issues.
-### Could not retrieve token from local cache
+### Couldn't retrieve token from local cache
You must run az login again and go through an interactive sign in flow. Review the section [Using Az Cloud Shell](#using-az-cloud-shell). ### Access denied: Azure role not assigned
-If you see the following error on your SSH prompt, verify that you have configured Azure RBAC policies for the VM that grants the user either the Virtual Machine Administrator Login or Virtual Machine User Login role. If you are running into issues with Azure role assignments, see the article [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit).
+If you see the following error on your SSH prompt, verify that you have configured Azure RBAC policies for the VM that grant the user either the Virtual Machine Administrator Login or Virtual Machine User Login role. If you're running into issues with Azure role assignments, see the article [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit).
### Problems deleting the old (AADLoginForLinux) extension
-If the uninstall scripts fail, the extension may get stuck in a transitioning state. When this happens, it can leave packages that it is supposed to uninstall during its removal. In such cases, it is better to manually uninstall the old packages and then try to run az vm extension delete command.
+If the uninstall scripts fail, the extension may get stuck in a transitioning state. When this happens, it can leave packages that it's supposed to uninstall during its removal. In such cases, it's better to manually uninstall the old packages and then try to run the az vm extension delete command (a consolidated sketch follows the numbered list below).
1. Log in as a local user with admin privileges.
-1. Make sure there are no logged in AAD users. Call **who -u** command to see who is logged in; then **sudo kill** `<pid>` for all session processes reported by the previous command.
-1. Run **sudo apt remove --purge aadlogin** (Ubuntu/Debian), **sudo yum erase aadlogin** (RHEL/CentOS) or **sudo zypper remove aadlogin** (OpenSuse/SLES).
+1. Make sure there are no logged-in Azure AD users. Run the `who -u` command to see who is logged in; then `sudo kill <pid>` for all session processes reported by the previous command.
+1. Run `sudo apt remove --purge aadlogin` (Ubuntu/Debian), `sudo yum erase aadlogin` (RHEL or CentOS), or `sudo zypper remove aadlogin` (OpenSuse or SLES).
1. If the command fails, try the low-level tools with scripts disabled:
- 1. For Ubuntu/Deian run **sudo dpkg --purge aadlogin** . If it is still failing because of the script, delete **/var/lib/dpkg/info/aadlogin.prerm** file and try again.
- 1. For everything else run **rpm -e –noscripts aadogin**.
-1. Repeat steps 3-4 for package **aadlogin-selinux**.
+ 1. For Ubuntu/Debian run `sudo dpkg --purge aadlogin`. If it's still failing because of the script, delete the `/var/lib/dpkg/info/aadlogin.prerm` file and try again.
+ 1. For everything else run `rpm -e --noscripts aadlogin`.
+1. Repeat steps 3-4 for package `aadlogin-selinux`.
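A consolidated sketch of the cleanup on an Ubuntu or Debian VM, using the package and extension names mentioned above (the PID is whatever `who -u` reports, and the resource names are placeholders):

```azurecli
# List signed-in Azure AD sessions, then end each reported session
who -u
sudo kill <pid>

# Remove the packages left behind by the failed uninstall
sudo apt remove --purge aadlogin aadlogin-selinux

# Retry deleting the old extension from the VM resource
az vm extension delete --resource-group AzureADLinuxVM --vm-name myVM --name AADLoginForLinux
```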
### Extension Install Errors
Installation of the AADSSHLoginForLinux VM extension to existing computers fails
The Status of the AADSSHLoginForLinux VM extension shows as Transitioning in the portal.
-Cause 1: This failure is due to a System Assigned Managed Identity being required.
+Cause 1: This failure is due to a system-assigned managed identity being required.
Solution 1: Perform these actions: 1. Uninstall the failed extension.
-1. Enable a System Assigned Managed Identity on the Azure VM.
+1. Enable a system-assigned managed identity on the Azure VM.
1. Run the extension install command again. #### Non-zero exit code: 23
Solution 1: Upgrade the Azure CLI client to version 2.21.0 or higher.
After the user has successfully signed in using az login, connection to the VM using `az ssh vm --ip <address>` or `az ssh vm --name <vm_name> -g <resource_group>` fails with *Connection closed by <ip_address> port 22*.
-Cause 1: The user is not assigned to the either the Virtual Machine Administrator/User Login Azure RBAC roles within the scope of this VM.
+Cause 1: The user isn't assigned to either of the Virtual Machine Administrator/User Login Azure RBAC roles within the scope of this VM.
Solution 1: Add the user to either of the Virtual Machine Administrator/User Login Azure RBAC roles within the scope of this VM.
-Cause 2: The user is in a required Azure RBAC role but the System Assigned managed identity has been disabled on the VM.
+Cause 2: The user is in a required Azure RBAC role but the system-assigned managed identity has been disabled on the VM.
Solution 2: Perform these actions:
-1. Enable the System Assigned managed identity on the VM.
+1. Enable the system-assigned managed identity on the VM.
1. Allow several minutes to pass before trying to connect using `az ssh vm --ip <ip_address>`. ### Virtual machine scale set Connection Issues
-Virtual machine scale set VM connections may fail if the virtual machine scale set instances are running an old model. Upgrading virtual machine scale set instances to the latest model may resolve issues, especially if an upgrade has not been done since the Azure AD Login extension was installed. Upgrading an instance applies a standard virtual machine scale set configuration to the individual instance.
+Virtual machine scale set VM connections may fail if the virtual machine scale set instances are running an old model. Upgrading virtual machine scale set instances to the latest model may resolve issues, especially if an upgrade hasn't been done since the Azure AD Login extension was installed. Upgrading an instance applies a standard virtual machine scale set configuration to the individual instance.
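A hedged sketch of upgrading every instance of a scale set to the latest model (scale set and resource group names are placeholders):

```azurecli
# Apply the scale set's latest model to all instances
az vmss update-instances --resource-group AzureADLinuxVM --name myVMSS --instance-ids "*"
```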
## Next steps+
+[What is a device identity?](overview.md)
+[Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Users in this role can read settings and administrative information across Micro
>- [OneDrive admin center](https://admin.onedrive.com/) - OneDrive admin center does not support the Global Reader role >- [Microsoft 365 admin center](https://admin.microsoft.com/Adminportal/Home#/homepage) - Global Reader can't read integrated apps. You won't find the **Integrated apps** tab under **Settings** in the left pane of Microsoft 365 admin center. >- [Office Security & Compliance Center](https://sip.protection.office.com/homepage) - Global Reader can't read SCC audit logs, do content search, or see Secure Score.
->- [Teams admin center](https://admin.teams.microsoft.com) - Global Reader cannot read **Teams lifecycle**, **Analytics & reports**, **IP phone device management** and **App catalog**.
+>- [Teams admin center](https://admin.teams.microsoft.com) - Global Reader cannot read **Teams lifecycle**, **Analytics & reports**, **IP phone device management**, and **App catalog**. For more information, see [Use Microsoft Teams administrator roles to manage Teams](/microsoftteams/using-admin-roles).
>- [Privileged Access Management (PAM)](/office365/securitycompliance/privileged-access-management-overview) doesn't support the Global Reader role. >- [Azure Information Protection](/azure/information-protection/what-is-information-protection) - Global Reader is supported [for central reporting](/azure/information-protection/reports-aip) only, and when your Azure AD organization isn't on the [unified labeling platform](/azure/information-protection/faqs#how-can-i-determine-if-my-tenant-is-on-the-unified-labeling-platform). > - [SharePoint](https://admin.microsoft.com/sharepoint) - Global Reader currently can't access SharePoint using PowerShell.
Identity Protection Center | All permissions of the Security Reader role<br>Addi
[Privileged Identity Management](../privileged-identity-management/pim-configure.md) | All permissions of the Security Reader role<br>**Cannot** manage Azure AD role assignments or settings [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage security policies<br>View, investigate, and respond to security threats<br>View reports Azure Advanced Threat Protection | Monitor and respond to suspicious security activity
-Windows Defender ATP and EDR | Assign roles<br>Manage machine groups<br>Configure endpoint threat detection and automated remediation<br>View, investigate, and respond to alerts
+[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | Assign roles<br>Manage machine groups<br>Configure endpoint threat detection and automated remediation<br>View, investigate, and respond to alerts<br/>View machines/device inventory
[Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information<br>Cannot make changes to Intune [Cloud App Security](/cloud-app-security/manage-admins) | Add admins, add policies and settings, upload logs and perform governance actions [Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services
Users with this role can manage alerts and have global read-only access on secur
| [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) | All permissions of the Security Reader role<br>Additionally, the ability to perform all Identity Protection Center operations except for resetting passwords and configuring alert e-mails. | | [Privileged Identity Management](../privileged-identity-management/pim-configure.md) | All permissions of the Security Reader role | | [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts |
-| Windows Defender ATP and EDR | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts |
+| [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts |
| [Intune](/intune/role-based-access-control) | All permissions of the Security Reader role | | [Cloud App Security](/cloud-app-security/manage-admins) | All permissions of the Security Reader role | | [Microsoft 365 service health](/microsoft-365/enterprise/view-service-health) | View the health of Microsoft 365 services |
In | Can do
Identity Protection Center | Read all security reports and settings information for security features<br><ul><li>Anti-spam<li>Encryption<li>Data loss prevention<li>Anti-malware<li>Advanced threat protection<li>Anti-phishing<li>Mail flow rules [Privileged Identity Management](../privileged-identity-management/pim-configure.md) | Has read-only access to all information surfaced in Azure AD Privileged Identity Management: Policies and reports for Azure AD role assignments and security reviews.<br>**Cannot** sign up for Azure AD Privileged Identity Management or make any changes to it. In the Privileged Identity Management portal or via PowerShell, someone in this role can activate additional roles (for example, Global Administrator or Privileged Role Administrator), if the user is eligible for them. [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | View security policies<br>View and investigate security threats<br>View reports
-Windows Defender ATP and EDR | View and investigate alerts. When you turn on role-based access control in Windows Defender ATP, users with read-only permissions such as the Azure AD Security Reader role lose access until they are assigned to a Windows Defender ATP role.
+[Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | View and investigate alerts. When you turn on role-based access control in Microsoft Defender for Endpoint, users with read-only permissions such as the Azure AD Security Reader role lose access until they are assigned to a Microsoft Defender for Endpoint role.
[Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information. Cannot make changes to Intune. [Cloud App Security](/cloud-app-security/manage-admins) | Has read permissions and can manage alerts [Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services
active-directory Palantir Foundry Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/palantir-foundry-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Palantir Foundry** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+1. Select **Upload metadata file**, select the metadata file which you have downloaded in the **[Configure Palantir Foundry SSO](#configure-palantir-foundry-sso)** section, and then select **Add**.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Browse Upload Metadata](common/browse-upload-metadata.png)
-1. In the **Basic SAML Configuration** section, if you see **Service Provider metadata file**, follow these steps:
- 1. Select **Upload metadata file**.
- 1. Select the folder icon to select the metadata file which you have downloaded in **Configure Palantir Foundry SSO** section, and then select **Upload**.
- 1. When the metadata file is successfully uploaded, the values for **Identifier** and **Reply URL** appear automatically in the Palantir Foundry section text box.
+1. When the metadata file is successfully uploaded, the values for **Identifier**, **Reply URL** and **Logout URL** appear automatically in the Palantir Foundry section text box.
> [!Note]
- > If the **Identifier** and **Reply URL** values don't appear automatically, fill in the values manually according to your requirements.
-
-1. If you don't see **Service Provider metadata file** in the **Basic SAML Configuration** section, perform the following steps:
-
- 1. In the **Identifier** text box, type a value using the following pattern: `urn:uuid:<SOME_UUID>`
-
- 1. In the **Reply URL** text box, type a URL using the following pattern: `https://<DOMAIN>/multipass/api/collectors/<SOME_UUID>/saml/SSO`
-
- 1. In the **Logout URL** text box, type a URL using the following pattern: `https://<DOMAIN>/multipass/api/collectors/<SOME_UUID>/SingleLogout`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Logout URL. Contact [Palantir Foundry Client support team](mailto:support@palantir.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > If the **Identifier**, **Reply URL**, and **Logout URL** values don't appear automatically, fill in the values manually; you can find them in the Foundry Control Panel.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/metadataxml.png)
-1. On the **Set up Palantir Foundry** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
- ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
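If you prefer the CLI over the portal, a hedged sketch of creating an equivalent test user; the user principal name and password are placeholders for your own tenant values:

```azurecli
# Create the B.Simon test user in Azure AD
az ad user create \
    --display-name "B.Simon" \
    --user-principal-name "B.Simon@contoso.onmicrosoft.com" \
    --password "<strong-password>"
```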
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Palantir Foundry SSO
-1. Log in to the Palantir Foundry as an administrator.
-
-1. In Foundry Control Panel tab, go to the **Authentication** and click **Add SAML provider**.
+1. In Foundry Control Panel, go to the **Authentication** tab and click **Add SAML provider**.
![Screenshot for Add SAML provider.](./media/palantir-foundry-tutorial/saml-provider.png)
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Palantir Foundry you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure Palantir Foundry you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
The Federal Information Processing Standard (FIPS) 140-2 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. AKS allows you to create Linux-based node pools with FIPS 140-2 enabled. Deployments running on FIPS-enabled node pools can use those cryptographic modules to provide increased security and help meet security controls as part of FedRAMP compliance. For more details on FIPS 140-2, see [Federal Information Processing Standard (FIPS) 140-2][fips].
-FIPS-enabled node pools are currently in preview.
--
-You will need the *aks-preview* Azure CLI extension version *0.5.11* or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
+### Prerequisites
You need the Azure CLI version 2.32.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
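For context, a minimal sketch (cluster, resource group, and node pool names are illustrative) of adding a FIPS-enabled Linux node pool:

```azurecli
# Add a Linux node pool with FIPS 140-2 enabled
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name fipsnp \
    --enable-fips-image
```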
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/export-api-power-platform.md
Once the connector is created, navigate to your [Power Apps](https://make.powera
:::image type="content" source="media/export-api-power-platform/custom-connector-power-app.png" alt-text="Custom connector in Power Platform"::: > [!NOTE]
-> To call the API from the PowerApps test console, you need to add the "https://flow.microsoft.com" URL as an origin to the [CORS policy](api-management-cross-domain-policies.md#CORS) in your API Management instance.
+> To call the API from the Power Apps test console, you need to add the "https://flow.microsoft.com" URL as an origin to the [CORS policy](api-management-cross-domain-policies.md#CORS) in your API Management instance.
## Next steps
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/how-to-migrate.md
Title: How to migrate App Service Environment v2 to App Service Environment v3 description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 - Previously updated : 1/17/2022+ Last updated : 1/28/2022
+zone_pivot_groups: app-service-cli-portal
# How to migrate App Service Environment v2 to App Service Environment v3
+An App Service Environment v2 can be migrated to an [App Service Environment v3](overview.md). To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
+ > [!IMPORTANT]
-> This article describes a feature that is currently in preview. You should use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
+> It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
>
-An App Service Environment v2 can be migrated to an [App Service Environment v3](overview.md). To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
- ## Prerequisites Ensure you understand how migrating to an App Service Environment v3 will affect your applications. Review the [migration process](migrate.md#overview-of-the-migration-process) to understand the process timeline and where and when you'll need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which may answer some questions you currently have.
-For the initial preview of the migration feature, you should follow the below steps in order and as written since you'll be making Azure REST API calls. The recommended way for making these calls is by using the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
+
+When using the Azure CLI to carry out the migration, you should follow the steps below in order and as written, since you'll be making Azure REST API calls. The recommended way to make these calls is with the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use the [Azure Cloud Shell](https://shell.azure.com/).
ASE_RG=<Your-Resource-Group>
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query id --output tsv) ```
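For reference, here's a minimal sketch of the full variable setup the later commands assume. The `ASE_NAME` assignment isn't shown above but is implied by the `az appservice ase show` call; substitute your own names.

```azurecli
# Name and resource group of the App Service Environment being migrated
ASE_NAME=<Your-ASE-Name>
ASE_RG=<Your-Resource-Group>

# Resource ID used by the migration REST calls
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query id --output tsv)

# Quick sanity check that the lookup succeeded before continuing
echo $ASE_ID
```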
-## 2. Delegate your App Service Environment subnet
-
-App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and update the delegation if needed before migrating. You can update the delegation either by running the following command or by navigating to the subnet in the [Azure portal](https://portal.azure.com).
-
-```azurecli
-az network vnet subnet update -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
-```
-
-![subnet delegation sample](./media/migration/subnet-delegation.jpg)
-
-## 3. Validate migration is supported
+## 2. Validate migration is supported
-The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. For an estimate of when you can migrate, see the [timeline](migrate.md#preview-limitations). If your environment [won't be supported for migration](migrate.md#migration-feature-limitations) or you want to migrate to App Service Environment v3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
+The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
```azurecli az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation"
az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=vali
If there are no errors, your migration is supported and you can continue to the next step.
-## 4. Generate IP addresses for your new App Service Environment v3
+## 3. Generate IP addresses for your new App Service Environment v3
-Run the following command to create the new IPs. This step will take about 5 minutes to complete. Don't scale or make changes to your existing App Service Environment during this time.
+Run the following command to create the new IPs. This step will take about 15 minutes to complete. Don't scale or make changes to your existing App Service Environment during this time.
```azurecli
-az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=premigration" --verbose
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=premigration"
``` Run the following command to check the status of this step. ```azurecli
-az rest --method get --uri "${ASE_ID}?api-version=2018-11-01" --query properties.status
+az rest --method get --uri "${ASE_ID}?api-version=2021-02-01" --query properties.status
``` If it's in progress, you'll get a status of "Migrating". Once you get a status of "Ready", run the following command to get your new IPs. If you don't see the new IPs immediately, wait a few minutes and try again. ```azurecli
-az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2018-11-01"
+az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021-02-01"
```
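If you only want the new addresses rather than the full networking configuration, a `--query` filter can help. The property names below (`externalInboundIpAddresses`, `windowsOutboundIpAddresses`) are assumptions based on the App Service Environment v3 networking configuration; inspect the full response if they return nothing.

```azurecli
# Show only the assumed inbound and outbound address properties of the new environment
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021-02-01" \
    --query "properties.[externalInboundIpAddresses, windowsOutboundIpAddresses]"
```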
-## 5. Update dependent resources with new IPs
+## 4. Update dependent resources with new IPs
-Don't move on to full migration immediately after completing the previous step. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates.
+Don't move on to migration immediately after completing the previous step. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates.
-## 6. Full migration
+## 5. Delegate your App Service Environment subnet
-Only start this step once you've completed all pre-migration actions listed above and understand the [implications of full migration](migrate.md#full-migration) including what will happen during this time. There will be about one hour of downtime. Don't scale or make changes to your existing App Service Environment during this step.
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and update the delegation if needed before migrating. You can update the delegation either by running the following command or by navigating to the subnet in the [Azure portal](https://portal.azure.com).
```azurecli
-az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration" --verbose
+az network vnet subnet update -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
+```
+
+![subnet delegation sample](./media/migration/subnet-delegation.png)
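To confirm the delegation took effect before moving on, you can inspect the subnet. This is a quick check, not part of the official steps.

```azurecli
# Should print Microsoft.Web/hostingEnvironments as the only delegation on the subnet
az network vnet subnet show -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> \
    --query "delegations[].serviceName" --output tsv
```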
+
+## 6. Migrate to App Service Environment v3
+
+Only start this step once you've completed all pre-migration actions listed previously and understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. There will be about one hour of downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration"
``` Run the following command to check the status of your migration. The status will show as "Migrating" while in progress. ```azurecli
-az rest --method get --uri "${ASE_ID}?api-version=2018-11-01" --query properties.status
+az rest --method get --uri "${ASE_ID}?api-version=2021-02-01" --query properties.status
```
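If you prefer not to re-run the status check by hand, a small polling loop is one option. This is only a sketch; adjust the sleep interval to taste.

```azurecli
# Poll the environment status until it reports Ready (expect about one hour of downtime)
while true; do
    STATUS=$(az rest --method get --uri "${ASE_ID}?api-version=2021-02-01" --query properties.status --output tsv)
    echo "$(date +%T) status: ${STATUS}"
    [ "${STATUS}" = "Ready" ] && break
    sleep 60
done
```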
-Once you get a status of "Ready", migration is done and you have an App Service Environment v3.
+Once you get a status of "Ready", migration is done and you have an App Service Environment v3. Your apps will now be running in your new environment.
Get the details of your new environment by running the following command or by navigating to the [Azure portal](https://portal.azure.com).
Get the details of your new environment by running the following command or by n
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG ``` ++
+## 1. Validate migration is supported
+
+From the [Azure portal](https://portal.azure.com), navigate to the **Overview** page for the App Service Environment you'll be migrating. The platform will validate whether migration is supported for your App Service Environment. Wait a couple of seconds after the page loads for this validation to take place.
+
+If migration is supported for your App Service Environment, there are three ways to access the migration feature. These methods include a banner at the top of the overview page, a new item in the left-hand side menu called **Migration (preview)**, and an info box on the **Configuration** page. Select any of these methods to move on to the next step in the migration process.
+
+![migration access points](./media/migration/portal-overview.png)
+
+![configuration page view](./media/migration/configuration-migration-support.png)
+
+If you don't see these elements, your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state (which blocks migration). If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
+
+The migration page will guide you through the series of steps to complete the migration.
+
+![migration page sample](./media/migration/migration-ux-pre.png)
+
+## 2. Generate IP addresses for your new App Service Environment v3
+
+Under **Generate new IP addresses**, confirm you understand the implications and start the process. This step will take about 15 minutes to complete. Don't scale or make changes to your existing App Service Environment during this time. You may see a message a few minutes after starting this step asking you to refresh the page. If you do, select refresh as shown in the sample so that your new IP addresses appear.
+
+![pre-migration request to refresh](./media/migration/pre-migration-refresh.png)
+
+## 3. Update dependent resources with new IPs
+
+When the previous step finishes, you'll be shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. Don't move on to the next step until you confirm that you have made these updates.
+
+![sample IPs](./media/migration/ip-sample.png)
+
+## 4. Delegate your App Service Environment subnet
+
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and update the delegation if needed before migrating. A link to your subnet is given so that you can confirm and update as needed.
+
+![ux subnet delegation sample](./media/migration/subnet-delegation-ux.png)
+
+## 5. Migrate to App Service Environment v3
+
+Once you've completed all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what will happen during this time. There will be about one hour of downtime. Scaling and modifications to your existing App Service Environment will be blocked during this step.
+
+When migration is complete, you'll have an App Service Environment v3 and all of your apps will be running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
++ ## Next steps > [!div class="nextstepaction"]
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migrate.md
Title: Migration to App Service Environment v3
description: Overview of the migration process to App Service Environment v3 Previously updated : 1/17/2022 Last updated : 1/28/2022 # Migration to App Service Environment v3
+App Service can now migrate your App Service Environment v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+ > [!IMPORTANT]
-> This article describes a feature that is currently in preview. You should use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
+> It's recommended that you use this feature on dev environments first, before migrating any production environments, to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
>
-App Service can now migrate your App Service Environment v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
- ## Supported scenarios At this time, App Service Environment migrations to v3 support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
At this time, App Service Environment migrations to v3 support both [Internal Lo
You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
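If the CLI is more convenient, the same `kind` value can be read there as well. This is a quick sketch; the value is expected to be something like `ASEV2` or `ASEV3`.

```azurecli
# Print the kind of the environment, which indicates its version
az appservice ase show --name <ase-name> --resource-group <resource-group> --query kind --output tsv
```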
-### Preview limitations
+## Migration feature limitations
-For this version of the preview, your new App Service Environment will be placed in the existing subnet that was used for your old environment. Internet facing App Service Environment cannot be migrated to ILB App Service Environment v3 and vice versa.
+With the current version of the migration feature, your new App Service Environment will be placed in the existing subnet that was used for your old environment. An internet facing App Service Environment can't be migrated to an ILB App Service Environment v3 and vice versa.
Note that App Service Environment v3 doesn't currently support the following features that you may be using with your current App Service Environment. If you require any of these features, don't migrate until they're supported. - Sending SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25.-- Deploying your apps with FTP-- Using remote debug with your apps-- Monitoring your traffic with Network Watcher or NSG Flow-- Configuring an IP-based TLS/SSL binding with your apps
+- Deploying your apps with FTP.
+- Using remote debug with your apps.
+- Monitoring your traffic with Network Watcher or NSG Flow.
+- Configuring an IP-based TLS/SSL binding with your apps.
-The following scenarios aren't supported in this version of the preview.
+The following scenarios aren't supported in this version of the feature:
- App Service Environment v2 -> Zone Redundant App Service Environment v3 - App Service Environment v1 - App Service Environment v1 -> Zone Redundant App Service Environment v3-- |ILB App Service Environment v2 with a custom domain suffix
+- ILB App Service Environment v2 with a custom domain suffix
- ILB App Service Environment v1 with a custom domain suffix - Internet facing App Service Environment v2 with IP SSL addresses - Internet facing App Service Environment v1 with IP SSL addresses - [Zone pinned](zone-redundancy.md) App Service Environment v2-- App Service Environment in a region not listed above
+- App Service Environment in a region not listed in the supported regions
-The App Service platform will review your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you won't be able to migrate at this time.
+There are no plans for the migration feature to support App Service Environment v1 within a classic VNet. See [migration alternatives](migration-alternatives.md) if your App Service Environment falls into this category.
-## Overview of the migration process
-
-Migration consists of a series of steps that must be followed in order. Key points are given below for a subset of the steps. It's important to understand what will happen during these steps and how your environment and apps will be impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-migrate.md).
-
-> [!NOTE]
-> For this version of the preview, migration must be carried out using Azure REST API calls.
->
+The App Service platform will review your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you won't be able to migrate at this time using the migration feature. If your environment is in an unhealthy or suspended state, you won't be able to migrate until you make the needed updates.
-### Delegate your App Service Environment subnet
+## Overview of the migration process
-App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. If the App Service Environment's subnet isn't delegated or it's delegated to a different resource, migration will fail.
+Migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what will happen during these steps and how your environment and apps will be impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-migrate.md).
### Generate IP addresses for your new App Service Environment v3
-The platform will create the [new inbound IP (if you're migrating an internet facing App Service Environment) and the new outbound IP](networking.md#addresses). While these IPs are getting created, activity with your existing App Service Environment won't be interrupted, however, you won't be able to scale or make changes to your existing environment. This process will take about 5 minutes to complete.
+The platform will create the [new inbound IP (if you're migrating an internet facing App Service Environment) and the new outbound IP](networking.md#addresses) addresses. While these IPs are being created, activity with your existing App Service Environment won't be interrupted; however, you won't be able to scale or make changes to your existing environment. This process will take about 15 minutes to complete.
-When completed, you'll be given the new IPs that will be used by your future App Service Environment v3. These new IPs have no effect on your existing environment. The IPs used by your existing environment will continue to be used up until your existing environment is shut down during the full migration step.
+When completed, you'll be given the new IPs that will be used by your future App Service Environment v3. These new IPs have no effect on your existing environment. The IPs used by your existing environment will continue to be used up until your existing environment is shut down during the migration step.
### Update dependent resources with new IPs Once the new IPs are created, you'll have the new default outbound to the internet public addresses so you can adjust any external firewalls, DNS routing, network security groups, and so on, in preparation for the migration. For public internet facing App Service Environment, you'll also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
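As one illustration, if an Azure DNS A record points at the old inbound address, it could be swapped to the new one with the Azure CLI. This is a hedged sketch only; the zone, record, and IP values are placeholders, and your DNS may be hosted elsewhere entirely.

```azurecli
# Remove the old inbound IP and add the new one on an existing A record
az network dns record-set a remove-record --resource-group <rg> --zone-name contoso.com \
    --record-set-name www --ipv4-address <old-inbound-ip>
az network dns record-set a add-record --resource-group <rg> --zone-name contoso.com \
    --record-set-name www --ipv4-address <new-inbound-ip>
```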
-### Full migration
+### Delegate your App Service Environment subnet
+
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. If the App Service Environment's subnet isn't delegated or it's delegated to a different resource, migration will fail.
+
+### Migrate to App Service Environment v3
-After updating all dependent resources with your new IPs, you should continue with full migration as soon as possible. It's recommended that you move on within one week.
+After updating all dependent resources with your new IPs and properly delegating your subnet, you should continue with migration as soon as possible.
-During full migration, the following events will occur:
+During migration, the following events will occur:
-- The existing App Service Environment is shut down and replaced by the new App Service Environment v3-- All App Service plans in the App Service Environment are converted from Isolated to Isolated v2
+- The existing App Service Environment is shut down and replaced by the new App Service Environment v3.
+- All App Service plans in the App Service Environment are converted from Isolated to Isolated v2.
- All of the apps that are on your App Service Environment are temporarily down. You should expect about one hour of downtime.
- - If you can't support downtime, see [migration-alternatives](migration-alternatives.md#guidance-for-manual-migration)
-- The public addresses that are used by the App Service Environment will change to the IPs identified previously
+ - If you can't support downtime, see [migration-alternatives](migration-alternatives.md#guidance-for-manual-migration).
+- The public addresses that are used by the App Service Environment will change to the IPs identified during the previous step.
As in the IP generation step, you won't be able to scale or modify your App Service Environment or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment will be running on the new App Service Environment v3.
As in the IP generation step, you won't be able to scale or modify your App Serv
## Pricing
-There's no cost to migrate your App Service Environment. You'll stop being charged for your previous App Service Environment as soon as it shuts down during the full migration process, and you'll begin getting charged for your new App Service Environment v3 as soon as it's deployed. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing).
-
-## Migration feature limitations
-
-The migration feature doesn't plan on supporting App Service Environment v1 within a classic VNet. See [migration alternatives](migration-alternatives.md) if your App Service Environment falls into this category. Also, you won't be able to migrate if your App Service Environment is in an unhealthy or suspended state.
+There's no cost to migrate your App Service Environment. You'll stop being charged for your previous App Service Environment as soon as it shuts down during the migration process, and you'll begin getting charged for your new App Service Environment v3 as soon as it's deployed. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing).
## Frequently asked questions - **What if migrating my App Service Environment is not currently supported?** You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see [migration alternatives](migration-alternatives.md). - **Will I experience downtime during the migration?**
- Yes, you should expect about one hour of downtime during the full migration step so plan accordingly. If downtime isn't an option for you, see [migration alternatives](migration-alternatives.md).
+ Yes, you should expect about one hour of downtime during the migration step so plan accordingly. If downtime isn't an option for you, see [migration alternatives](migration-alternatives.md).
- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?** No, all of your apps running on the old environment will be automatically migrated to the new environment and run like before. No user input is needed. - **What if my App Service Environment has a custom domain suffix?**
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migration-alternatives.md
Title: Alternative methods for migrating to App Service Environment v3
description: Migrate to App Service Environment v3 Without Using the Migration Feature Previously updated : 1/17/2022 Last updated : 1/28/2022 # Migrate to App Service Environment v3 without using the migration feature > [!NOTE]
-> The App Service Environment v3 [migration feature](migrate.md) is now available in preview for a set of supported environment configurations. Consider that feature which provides an automated migration path to [App Service Environment v3](overview.md).
+> The App Service Environment v3 [migration feature](migrate.md) is now available for a set of supported environment configurations. Consider that feature which provides an automated migration path to [App Service Environment v3](overview.md).
>
-If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#preview-limitations). Otherwise, you can choose to use one of the alternative migration options given below.
+If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#migration-feature-limitations). Otherwise, you can choose to use one of the alternative migration options given in this article.
If your App Service Environment [won't be supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the alternative methods to migrate to App Service Environment v3.
Scenario: An existing app running on an App Service Environment v1 or App Servic
For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
-Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) on the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment (15 minutes), create the new App Service Environment v3 (30 minutes), configure any infrastructure and connected resources to work with the new environment (your responsibility), and deploy your apps onto the new environment (application deployment, type, and quantity dependent).
+Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) on the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
### Checklist before migrating apps
App Service Environment v3 uses Isolated v2 App Service plans that are priced an
The [back up](../manage-backup.md) and [restore](../web-sites-restore.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [requirements and restrictions](../manage-backup.md#requirements-and-restrictions) of this feature.
-The step-by-step instructions in the current documentation for [back up](../manage-backup.md) and [restore](../web-sites-restore.md) should be sufficient to allow you to use this feature. When restoring, the **Storage** option lets you select any backup ZIP file from any existing Azure Storage account container in your subscription. A sample of a restore configuration is given below.
+The step-by-step instructions in the current documentation for [back up](../manage-backup.md) and [restore](../web-sites-restore.md) should be sufficient to allow you to use this feature. When restoring, the **Storage** option lets you select any backup ZIP file from any existing Azure Storage account container in your subscription. A sample of a restore configuration is given in the following screenshot.
![back up and restore sample](./media/migration/back-up-restore-sample.png) |Benefits |Limitations | ||| |Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#what-gets-backed-up) |
-|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example apps, databases, storage accounts and containers) must all be in the same subscription |
+|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription |
|In-app MySQL databases are automatically backed up without any configuration |Backups can be up to 10 GB of app and database content, up to 4 GB of which can be the database backup. If the backup size exceeds this limit, you get an error. | |Can restore the app to a snapshot of a previous state |Using a [firewall enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported | |Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Using a [private endpoint enabled storage account](../../storage/common/storage-private-endpoints.md) for backup and restore isn't supported |
The step-by-step instructions in the current documentation for [back up](../mana
> Cloning apps is supported on Windows App Service only. >
-This solution is recommended for users that are using Windows App Service and can't migrate using the [migration feature](migrate.md). You'll need to set up your new App Service Environment v3 before cloning any apps. Cloning an app can take up to 30 minutes to complete. Cloning can be done using PowerShell as described in the [documentation](../app-service-web-app-cloning.md#cloning-an-existing-app-to-an-app-service-environment) or using the Azure portal as described below.
+This solution is recommended for users that are using Windows App Service and can't migrate using the [migration feature](migrate.md). You'll need to set up your new App Service Environment v3 before cloning any apps. Cloning an app can take up to 30 minutes to complete. Cloning can be done using PowerShell as described in the [documentation](../app-service-web-app-cloning.md#cloning-an-existing-app-to-an-app-service-environment) or using the Azure portal.
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate to your existing App Service and select **Clone App** under **Development Tools**. Fill in the required fields using the details for your new App Service Environment v3.
-1. Select an existing or create a new **Resource Group**
+1. Select an existing or create a new **Resource Group**.
1. Give your app a **Name**. This name can be the same as the old app, but note the site's default URL using the new environment will be different. You'll need to update any custom DNS or connected resources to point to the new URL.
-1. Use your App Service Environment v3 name for **Region**
-1. Choose whether or not to clone your deployment source
+1. Use your App Service Environment v3 name for **Region**.
+1. Choose whether or not to clone your deployment source.
1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, will be listed in the dropdown. 1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 pricing](overview.md#pricing).
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
||| |Can be automated using PowerShell |Only supported on Windows App Service | |Multiple apps can be cloned at the same time (cloning needs to be configured for each app individually or using a script) |Support is limited to [certain database types](../manage-backup.md#what-gets-backed-up) |
-|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Old and new environments as well as supporting resources (for example apps, databases, storage accounts and containers) must all be in the same subscription |
+|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription |
## Manually create your apps on an App Service Environment v3
You can also export templates for multiple resources directly from your resource
The following initial changes to your Azure Resource Manager templates are required to get your apps onto your App Service Environment v3: -- Update SKU parameters for App Service plan to an Isolated v2 plan as shown below if creating a new plan
+- Update SKU parameters for App Service plan to an Isolated v2 plan as shown below
```json "type": "Microsoft.Web/serverfarms",
The following initial changes to your Azure Resource Manager templates are requi
- Update App Service plan (serverfarm) parameter the app is to be deployed into to the plan associated with the App Service Environment v3 - Update hosting environment profile (hostingEnvironmentProfile) parameter to the new App Service Environment v3 resource ID-- An Azure Resource Manager template export includes all properties exposed by the resource providers for the given resources. Remove all non-required properties such as those which point to the domain of the old app. For example, you `sites` resource could be simplified to the below:
+- An Azure Resource Manager template export includes all properties exposed by the resource providers for the given resources. Remove all non-required properties, such as those that point to the domain of the old app. For example, your `sites` resource could be simplified to the following sample:
```json "type": "Microsoft.Web/sites",
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-linux-hrw-install.md
sudo python onboarding.py --deregister --endpoint="<URL>" --key="<PrimaryAccessK
To remove a Hybrid Runbook Worker group of Linux machines, you use the same steps as for a Windows hybrid worker group. See [Remove a Hybrid Worker group](automation-windows-hrw-install.md#remove-a-hybrid-worker-group).
+## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers
+
+You can create custom Azure Automation roles and grant the following permissions to Hybrid Worker Groups and Hybrid Workers. To learn more about how to create Azure Automation custom roles, see [Azure custom roles](/azure/role-based-access-control/custom-roles).
+
+**Actions** | **Description**
+---|---
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write | Creates a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read | Reads a Hybrid Runbook Worker.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write | Creates a Hybrid Runbook Worker.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action | Moves Hybrid Runbook Worker from one Worker Group to another.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete| Deletes a Hybrid Runbook Worker.
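As an example of how such a custom role might be defined with the Azure CLI, the sketch below bundles the actions from the table into a role definition. The role name, description, and subscription scope are placeholders, not values from this article.

```azurecli
# Create a custom role that can manage Hybrid Runbook Worker groups and workers
cat > hybrid-worker-operator.json <<'EOF'
{
  "Name": "Hybrid Runbook Worker Operator (example)",
  "Description": "Can manage Hybrid Runbook Worker groups and workers.",
  "Actions": [
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF
az role definition create --role-definition @hybrid-worker-operator.json
```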
++ ## Next steps * To learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-windows-hrw-install.md
To remove a Hybrid Runbook Worker group, you first need to remove the Hybrid Run
This process can take several seconds to finish. You can track its progress under **Notifications** from the menu.
-
+## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers
+
+You can create custom Azure Automation roles and grant the following permissions to Hybrid Worker Groups and Hybrid Workers. To learn more about how to create Azure Automation custom roles, see [Azure custom roles](/azure/role-based-access-control/custom-roles).
+
+**Actions** | **Description**
+---|---
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write | Creates a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read | Reads a Hybrid Runbook Worker.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write | Creates a Hybrid Runbook Worker.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action | Moves Hybrid Runbook Worker from one Worker Group to another.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete | Deletes a Hybrid Runbook Worker.
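Once a suitable custom role exists, it can be granted at the Hybrid Worker Group scope with a role assignment. This is a sketch; the role name, assignee, and the scope's resource ID format are assumptions, so adjust them to your environment.

```azurecli
# Assign a custom Automation role to a user or group at the Hybrid Worker Group scope
az role assignment create \
    --assignee <user-or-group-object-id-or-upn> \
    --role "<custom-role-name>" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<automation-account>/hybridRunbookWorkerGroups/<worker-group>"
```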
+
+
## Next steps * To learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/extension-based-hybrid-runbook-worker-install.md
Review the parameters used in this template.
|dnsNameForPublicIP| The DNS name for the public IP. |
+## Manage Role permissions for Hybrid Worker Groups
+You can create custom Azure Automation roles and grant the following permissions to Hybrid Worker Groups. To learn more about how to create Azure Automation custom roles, see [Azure custom roles](/azure/role-based-access-control/custom-roles).
+
+**Actions** | **Description**
+---|---
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write | Creates a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes a Hybrid Runbook Worker Group.
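To review who already has access at a given Hybrid Worker Group, you can list the existing role assignments. This is a sketch; the scope format is an assumption based on the action paths above.

```azurecli
# List role assignments that apply at the Hybrid Worker Group scope, including inherited ones
az role assignment list \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<automation-account>/hybridRunbookWorkerGroups/<worker-group>" \
    --include-inherited --output table
```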
+ ## Next steps * To learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
azure-app-configuration Enable Dynamic Configuration Dotnet Core Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core-push-refresh.md
The App Configuration .NET Core client library supports updating configuration o
1. Poll Model: This is the default behavior that uses polling to detect changes in configuration. Once the cached value of a setting expires, the next call to `TryRefreshAsync` or `RefreshAsync` sends a request to the server to check if the configuration has changed, and pulls the updated configuration if needed.
-1. Push Model: This uses [App Configuration events](./concept-app-configuration-event.md) to detect changes in configuration. Once App Configuration is set up to send key value change events to Azure Event Grid, the application can use these events to optimize the total number of requests needed to keep the configuration updated. Applications can choose to subscribe to these either directly from Event Grid, or though one of the [supported event handlers](../event-grid/event-handlers.md) such as a webhook, an Azure function or a Service Bus topic.
+1. Push Model: This uses [App Configuration events](./concept-app-configuration-event.md) to detect changes in configuration. Once App Configuration is set up to send key value change events to Azure Event Grid, the application can use these events to optimize the total number of requests needed to keep the configuration updated. Applications can choose to subscribe to these either directly from Event Grid, or through one of the [supported event handlers](../event-grid/event-handlers.md) such as a webhook, an Azure function, or a Service Bus topic.
-Applications can choose to subscribe to these events either directly from Event Grid, or through a web hook, or by forwarding events to Azure Service Bus. The Azure Service Bus SDK provides an API to register a message handler that simplifies this process for applications that either do not have an HTTP endpoint or do not wish to poll the event grid for changes continuously.
+Applications can choose to subscribe to these events either directly from Event Grid, or through a web hook, or by forwarding events to Azure Service Bus. The Azure Service Bus SDK provides an API to register a message handler that simplifies this process for applications that either don't have an HTTP endpoint or don't wish to poll the event grid for changes continuously.
This tutorial shows how you can implement dynamic configuration updates in your code using push refresh. It builds on the app introduced in the quickstarts. Before you continue, finish [Create a .NET Core app with App Configuration](./quickstart-dotnet-core-app.md) first. You can use any code editor to do the steps in this tutorial. [Visual Studio Code](https://code.visualstudio.com/) is an excellent option that's available on the Windows, macOS, and Linux platforms. In this tutorial, you learn how to:- > [!div class="checklist"]
+>
> * Set up a subscription to send configuration change events from App Configuration to a Service Bus topic > * Set up your .NET Core app to update its configuration in response to changes in App Configuration. > * Consume the latest configuration in your application.
To do this tutorial, install the [.NET Core SDK](https://dotnet.microsoft.com/do
## Set up Azure Service Bus topic and subscription
-This tutorial uses the Service Bus integration for Event Grid to simplify the detection of configuration changes for applications that do not wish to poll App Configuration for changes continuously. The Azure Service Bus SDK provides an API to register a message handler that can be used to update configuration when changes are detected in App Configuration. Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscription](../service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md) to create a service bus namespace, topic and subscription.
+This tutorial uses the Service Bus integration for Event Grid to simplify the detection of configuration changes for applications that don't wish to poll App Configuration for changes continuously. The Azure Service Bus SDK provides an API to register a message handler that can be used to update configuration when changes are detected in App Configuration. Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscription](../service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal.md) to create a service bus namespace, topic, and subscription.
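If you'd rather script that setup than use the portal, the Service Bus resources and the Event Grid subscription that forwards App Configuration key-value change events to the topic could be created roughly as follows. This is a sketch; all names and the resource group are placeholders, not values from the quickstart.

```azurecli
RG=<resource-group>

# Create the Service Bus namespace, topic, and subscription
az servicebus namespace create --resource-group $RG --name <namespace-name> --location <location>
az servicebus topic create --resource-group $RG --namespace-name <namespace-name> --name <topic-name>
az servicebus topic subscription create --resource-group $RG --namespace-name <namespace-name> \
    --topic-name <topic-name> --name <subscription-name>

# Forward App Configuration key-value change events to the Service Bus topic
TOPIC_ID=$(az servicebus topic show --resource-group $RG --namespace-name <namespace-name> \
    --name <topic-name> --query id --output tsv)
APPCONFIG_ID=$(az appconfig show --resource-group $RG --name <store-name> --query id --output tsv)
az eventgrid event-subscription create --name <event-subscription-name> \
    --source-resource-id $APPCONFIG_ID --endpoint-type servicebustopic --endpoint $TOPIC_ID
```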
Once the resources are created, add the following environment variables. These will be used to register an event handler for configuration changes in the application code.
namespace TestConsole
} ```
-The [SetDirty](/dotnet/api/microsoft.extensions.configuration.azureappconfiguration.iconfigurationrefresher.setdirty) method is used to set the cached value for key-values registered for refresh as dirty. This ensures that the next call to `RefreshAsync` or `TryRefreshAsync` re-validates the cached values with App Configuration and updates them if needed.
+The [SetDirty](/dotnet/api/microsoft.extensions.configuration.azureappconfiguration.iconfigurationrefresher.setdirty) method is used to set the cached value for key-values registered for refresh as dirty. This ensures that the next call to `RefreshAsync` or `TryRefreshAsync` revalidates the cached values with App Configuration and updates them if needed.
A random delay is added before the cached value is marked as dirty to reduce potential throttling in case multiple instances refresh at the same time. The default maximum delay before the cached value is marked as dirty is 30 seconds, but can be overridden by passing an optional `TimeSpan` parameter to the `SetDirty` method.
A random delay is added before the cached value is marked as dirty to reduce pot
## Build and run the app locally
-1. Set an environment variable named **AppConfigurationConnectionString**, and set it to the access key to your App Configuration store. If you use the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
+1. Set an environment variable named **AppConfigurationConnectionString**, and set it to the access key to your App Configuration store.
+
+ ### [Windows command prompt](#tab/windowscommandprompt)
+
+ To build and run the app locally using the Windows command prompt, run the following command and restart the command prompt to allow the change to take effect:
```console
- setx AppConfigurationConnectionString "connection-string-of-your-app-configuration-store"
+ setx AppConfigurationConnectionString "connection-string-of-your-app-configuration-store"
```
+ ### [PowerShell](#tab/powershell)
+ If you use Windows PowerShell, run the following command: ```powershell
- $Env:AppConfigurationConnectionString = "connection-string-of-your-app-configuration-store"
+ $Env:AppConfigurationConnectionString = "connection-string-of-your-app-configuration-store"
```
- If you use macOS or Linux, run the following command:
+ ### [macOS](#tab/unix)
+
+ If you use macOS, run the following command:
```console
- export AppConfigurationConnectionString='connection-string-of-your-app-configuration-store'
+ export AppConfigurationConnectionString='connection-string-of-your-app-configuration-store'
```
+ ### [Linux](#tab/linux)
+
+ If you use Linux, run the following command:
+
+ ```console
+ export AppConfigurationConnectionString='connection-string-of-your-app-configuration-store'
+ ```
+
+
+ 1. Run the following command to build the console app: ```console
- dotnet build
+ dotnet build
``` 1. After the build successfully completes, run the following command to run the app locally: ```console
- dotnet run
+ dotnet run
``` ![Push refresh run before update](./media/dotnet-core-app-pushrefresh-initial.png)
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance.md
az sql mi-arc create --name <name> --resource-group <group> --location <Azure l
Example: ```azurecli
-az sql mi-arc create --name sqldemo --resource-group rg --location uswest2 ΓÇôsubscription a97da202-47ad-4de9-8991-9f7cf689eeb9 --custom-location private-location
+az sql mi-arc create --name sqldemo --resource-group rg --location westus2 --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location
```
azure-arc Managed Instance Business Continuity Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-business-continuity-overview.md
+
+ Title: Business continuity overview - Azure Arc-enabled SQL Managed Instance
+description: Overview business continuity for Azure Arc-enabled SQL Managed Instance
++++++ Last updated : 01/27/2022+++
+# Overview: Azure Arc-enabled SQL Managed Instance business continuity (preview)
+
+Business continuity is a combination of people, processes, and technology that enables businesses to recover and continue operating in the event of disruptions. In hybrid scenarios, responsibility is shared between Microsoft and the customer: the customer owns and manages the on-premises infrastructure, while Microsoft provides the software.
+
+Business continuity for Azure Arc-enabled SQL Managed Instance is available in preview.
++
+## Features
+
+This overview describes the set of capabilities that come built-in with Azure Arc-enabled SQL Managed Instance and how you can leverage them to recover from disruptions.
+
+| Feature | Use case | Service Tier |
+|--|--|--|
+| Point in time restore | Use the built-in point in time restore (PITR) feature to recover from situations such as data corruption caused by human error. Learn more about [Point in time restore](.\point-in-time-restore.md). | Available in both General Purpose and Business Critical service tiers|
+| High availability | Deploy Azure Arc-enabled SQL Managed Instance in high availability mode to achieve local high availability. This mode automatically recovers from scenarios such as hardware failures and pod or node failures. The built-in listener service automatically redirects new connections to another replica while Kubernetes attempts to rebuild the failed replica. Learn more about [high availability in Azure Arc-enabled SQL Managed Instance](.\managed-instance-high-availability.md). |This feature is only available in the Business Critical service tier. <br> For the General Purpose service tier, Kubernetes provides basic recoverability from scenarios such as node or pod crashes. |
+|Disaster recovery| Configure disaster recovery by setting up another Azure Arc-enabled SQL Managed Instance in a geographically separate data center to synchronize data from the primary data center. This scenario is useful for recovering when an entire data center is unavailable due to disruptions such as power outages. | Available in both General Purpose and Business Critical service tiers|
+|
+
+## Next steps
+
+[Learn more about configuring point in time restore](.\point-in-time-restore.md)
+
+[Learn more about configuring high availability in Azure Arc-enabled SQL Managed Instance](.\managed-instance-high-availability.md)
+
+[Learn more about setting up and configuring disaster recovery in Azure Arc-enabled SQL Managed Instance](.\managed-instance-disaster-recovery.md)
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-disaster-recovery.md
+
+ Title: Disaster recovery - Azure Arc-enabled SQL Managed Instance
+description: Describes disaster recovery for Azure Arc-enabled SQL Managed Instance
++++++ Last updated : 01/27/2022+++
+# Azure Arc-enabled SQL Managed Instance - disaster recovery (preview)
+
+Disaster recovery in Azure Arc-enabled SQL Managed Instance is achieved using distributed availability groups.
+
+Disaster recovery features for Azure Arc-enabled SQL Managed Instance are available in preview.
++
+## Background
+
+The distributed availability groups used in Azure Arc-enabled SQL Managed Instance are the same technology that's used in SQL Server. Because Azure Arc-enabled SQL Managed Instance runs on Kubernetes, there's no Windows failover cluster involved. For more information, see [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups).
+
+> [!NOTE]
+> - The Azure Arc-enabled SQL Managed Instances in the geo-primary and geo-secondary sites need to be identical in terms of compute and capacity, as well as the service tiers they're deployed in.
+> - Distributed availability groups can be set up for either the General Purpose or Business Critical service tier.
+
+To configure disaster recovery:
+
+1. Create custom resource for distributed availability group at the primary site
+1. Create custom resource for distributed availability group at the secondary site
+1. Copy the mirroring certificates
+1. Set up the distributed availability group between the primary and secondary sites
+
+The following image shows a properly configured distributed availability group:
+
+![A properly configured distributed availability group](.\media\business-continuity\dag.png)
+
+### Configure distributed availability groups
+
+1. Provision the managed instance in the primary site.
+
+ ```azurecli
+ az sql mi-arc create --name sqlprimary --tier bc --replicas 3 --k8s-namespace my-namespace --use-k8s
+ ```
+
+2. Provision the managed instance in the secondary site and configure it as a disaster recovery instance. At this point, the system databases are not part of the contained availability group.
+
+ ```azurecli
+ az sql mi-arc create --name sqlsecondary --tier bc --replicas 3 --disaster-recovery-site true --k8s-namespace my-namespace --use-k8s
+ ```
+
+3. Copy the mirroring certificates from each site to a location that's accessible to both the geo-primary and geo-secondary instances.
+
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name <primaryinstance> --cert-file $HOME/sqlcerts/<name>.pem --k8s-namespace <namespace> --use-k8s
+ az sql mi-arc get-mirroring-cert --name <secondaryinstance> --cert-file $HOME/sqlcerts/<name>.pem --k8s-namespace <namespace> --use-k8s
+ ```
+
+ Example:
+
+ ```azurecli
+ az sql mi-arc get-mirroring-cert --name sqlprimary --cert-file $HOME/sqlcerts/sqlprimary.pem --k8s-namespace my-namespace --use-k8s
+ az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+4. Create the distributed availability group resource on both sites.
+
+ Use `az sql mi-arc dag...` to complete the task. The command seeds system databases in the disaster recovery instance, from the primary instance.
+
+ > [!NOTE]
+ > The distributed availability group name should be identical on both sites.
+
+ ```azurecli
+ az sql mi-arc dag create --dag-name <name of DAG> --name <name for primary DAG resource> --local-instance-name <primary instance name> --role primary --remote-instance-name <secondary instance name> --remote-mirroring-url tcp://<secondary IP> --remote-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
+
+ az sql mi-arc dag create --dag-name <name of DAG> --name <name for secondary DAG resource> --local-instance-name <secondary instance name> --role secondary --remote-instance-name <primary instance name> --remote-mirroring-url tcp://<primary IP> --remote-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s
+ ```
++
+ Example:
+ ```azurecli
+ az sql mi-arc dag create --dag-name dagtest --name dagPrimary --local-instance-name sqlPrimary --role primary --remote-instance-name sqlSecondary --remote-mirroring-url tcp://10.20.5.20:970 --remote-mirroring-cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s
+
+ az sql mi-arc dag create --dag-name dagtest --name dagSecondary --local-instance-name sqlSecondary --role secondary --remote-instance-name sqlPrimary --remote-mirroring-url tcp://10.20.5.50:970 --remote-mirroring-cert-file $HOME/sqlcerts/sqlprimary.pem --k8s-namespace my-namespace --use-k8s
+ ```
+
+## Manual failover from primary to secondary instance
+
+Use `az sql mi-arc dag...` to initiate a failover from the primary instance to the secondary instance. Any pending transactions on the geo-primary instance are replicated to the geo-secondary instance before the failover.
+
+```azurecli
+az sql mi-arc dag update --name <name of DAG resource> --role secondary --k8s-namespace <namespace> --use-k8s
+```
+
+Example:
+
+```azurecli
+az sql mi-arc dag update --name dagtest --role secondary --k8s-namespace <namespace> --use-k8s
+```
++
+## Forced failover
+
+If the geo-primary instance becomes unavailable, you can run the following commands on the geo-secondary DR instance to promote it to primary with a forced failover. A forced failover can incur data loss.
+
+Run the following command on the geo-primary instance, if it's still available:
+
+```azurecli
+az sql mi-arc dag update -k test --name dagtestp --use-k8s --role force-secondary
+```
+
+On the geo-secondary DR instance, run the following command to promote it to the primary role. This forced promotion can incur data loss.
+
+```azurecli
+az sql mi-arc dag update -k test --name dagtests --use-k8s --role force-primary-allow-data-loss
+```
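+
+To confirm the role change after a failover, you can inspect the distributed availability group custom resources on each site. This is a minimal sketch, assuming kubectl access to the namespace used in the earlier examples:
+
+```console
+# Check the state and role reported by the distributed availability group custom resources.
+kubectl get dags -n <namespace>
+kubectl describe dags <name of DAG resource> -n <namespace>
+```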
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-high-availability.md
-# Azure Arc-enabled SQL Managed Instance high availability
+# High Availability with Azure Arc-enabled SQL Managed Instance (preview)
-Azure Arc-enabled SQL Managed Instance is deployed on Kubernetes as a containerized application and uses kubernetes constructs such as stateful sets and persistent storage to provide built-in health monitoring, failure detection, and failover mechanisms to maintain service health. For increased reliability, you can also configure Azure Arc-enabled SQL Managed Instance to deploy with extra replicas in a high availability configuration. Monitoring, failure detection, and automatic failover are managed by the Arc data services data controller. This service is provided without user intervention ΓÇô all from availability group setup, configuring database mirroring endpoints, to adding databases to the availability group or failover and upgrade coordination. This document explores both types of high availability.
+Azure Arc-enabled SQL Managed Instance is deployed on Kubernetes as a containerized application. It uses Kubernetes constructs such as stateful sets and persistent storage to provide built-in health monitoring, failure detection, and failover mechanisms to maintain service health. For increased reliability, you can also configure Azure Arc-enabled SQL Managed Instance to deploy with extra replicas in a high availability configuration. Monitoring, failure detection, and automatic failover are managed by the Arc data services data controller. The Arc-enabled data service provides this capability without user intervention. The service sets up the availability group, configures database mirroring endpoints, adds databases to the availability group, and coordinates failover and upgrade. This document explores both types of high availability.
-## Built-in high availability
-Built-in high availability is provided by Kubernetes when remote persistent storage is configured and shared with nodes used by the Arc data service deployment. In this configuration, Kubernetes plays the role of the cluster orchestrator. When the managed instance in a container or the underlying node fails, the orchestrator bootstraps another instance of the container and attaches to the same persistent storage. This type is enabled by default when you deploy Azure Arc-enabled SQL Managed Instance.
+Azure Arc-enabled SQL Managed Instance provides different levels of high availability depending on whether the SQL managed instance was deployed as a *General Purpose* service tier or *Business Critical* service tier.
+
+## High availability in General Purpose service tier
+
+In the General Purpose service tier, there is only one replica available, and high availability is achieved via Kubernetes orchestration. For instance, if a pod or node containing the managed instance container image crashes, Kubernetes attempts to stand up another pod or node and attach the same persistent storage. During this time, the SQL managed instance is unavailable to applications. Applications need to reconnect and retry the transaction when the new pod is up. If the service type is `loadbalancer`, applications can reconnect to the same primary endpoint and Kubernetes redirects the connection to the new primary. If the service type is `nodeport`, applications need to reconnect to the new IP address.
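+
+As a rough sketch of checking the connection target after a recovery, assuming a hypothetical instance named `sqldemo` and the default `<managed_instance_name>-external-svc` service, you could list the services and note the service type and external address:
+
+```console
+# List services in the namespace; the external service exposes the SQL endpoint.
+kubectl get services -n <namespace>
+
+# Check whether the external service is LoadBalancer or NodePort and note its current address.
+kubectl get service sqldemo-external-svc -n <namespace> -o wide
+```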
### Verify built-in high availability
-This section, you verify the built-in high availability provided by Kubernetes. When you follow the steps to test out this functionality, you delete the pod of an existing managed instance and verify that Kubernetes recovers from this action.
+To verify the built-in high availability provided by Kubernetes, you can delete the pod of an existing managed instance and verify that Kubernetes recovers from this action by bootstrapping another pod and attaching the persistent storage.
### Prerequisites - Kubernetes cluster must have [shared, remote storage](storage-configuration.md#factors-to-consider-when-choosing-your-storage-configuration) - An Azure Arc-enabled SQL Managed Instance deployed with one replica (default) + 1. View the pods. ```console
This section, you verify the built-in high availability provided by Kubernetes.
kubectl get pods -n <namespace of data controller> ```
- For example
+ For example:
```output user@pc:/# kubectl get pods -n arc
This section, you verify the built-in high availability provided by Kubernetes.
After all containers within the pod have recovered, you can connect to the managed instance.
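As a sketch of the deletion step, assuming a single-replica instance whose pod is named `sql1-0` in the `arc` namespace (hypothetical names), you could delete the pod and watch Kubernetes re-create it:

```console
# Delete the managed instance pod; the stateful set re-creates it and reattaches the persistent storage.
kubectl delete pod sql1-0 -n arc

# Watch the replacement pod start up.
kubectl get pods -n arc --watch
```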
-## Deploy with Always On availability groups
-For increased reliability, you can configure Azure Arc-enabled SQL Managed Instance to deploy with extra replicas in a high availability configuration.
+## High availability in Business Critical service tier
+
+In the Business Critical service tier, in addition to what is natively provided by Kubernetes orchestration, a technology called contained availability groups provides higher levels of availability. An Azure Arc-enabled SQL managed instance deployed in the Business Critical service tier can be deployed with either 2 or 3 replicas. These replicas are always kept in sync with each other. Contained availability groups are built on SQL Server Always On availability groups. With contained availability groups, pod crashes or node failures are transparent to the application because at least one other pod has all the data from the primary and is ready to accept connections.
-Capabilities that availability groups enable:
+## Contained availability groups
-- When deployed with multiple replicas, a single availability group named `containedag` is created. By default, `containedag` has three replicas, including primary. All CRUD operations for the availability group are managed internally, including creating the availability group or joining replicas to the availability group created. Additional availability groups cannot be created in the Azure Arc-enabled SQL Managed Instance.
+An availability group binds one or more user databases into a logical group so that when there is a failover, the entire group of databases fails over to the secondary replica as a single unit. An availability group replicates only the data in the user databases, not instance-level data stored in system databases, such as logins, permissions, or agent jobs. A contained availability group also includes metadata from system databases such as `msdb` and `master`. When logins are created or modified in the primary replica, they're automatically created in the secondary replicas as well. Similarly, when an agent job is created or modified in the primary replica, the secondary replicas also receive those changes.
+
+Azure Arc-enabled SQL Managed Instance takes this concept of the contained availability group and adds a Kubernetes operator so these groups can be deployed and managed at scale.
+
+Capabilities that contained availability groups enable:
+
+- When deployed with multiple replicas, a single availability group with the same name as the Arc-enabled SQL managed instance is created. By default, the contained availability group has three replicas, including the primary. All CRUD operations for the availability group are managed internally, including creating the availability group and joining replicas to it. Additional availability groups can't be created in an Azure Arc-enabled SQL Managed Instance.
- All databases are automatically added to the availability group, including all user and system databases like `master` and `msdb`. This capability provides a single-system view across the availability group replicas. Notice both `containedag_master` and `containedag_msdb` databases if you connect directly to the instance. The `containedag_*` databases represent the `master` and `msdb` inside the availability group. - An external endpoint is automatically provisioned for connecting to databases within the availability group. This endpoint `<managed_instance_name>-external-svc` plays the role of the availability group listener.
-### Deploy
+### Deploy Azure Arc-enabled SQL Managed Instance with multiple replicas using Azure portal
+
+From the Azure portal, on the create Azure Arc-enabled SQL Managed Instance page:
+1. Select **Configure Compute + Storage** under **Compute + Storage**. The portal shows advanced settings.
+2. Under **Service tier**, select **Business Critical**.
+3. Check **For development use only** if you're using the instance for development purposes.
+4. Under **High availability**, select either **2 replicas** or **3 replicas**.
+
+![High availability settings](media/business-continuity/service-tier-replicas.png)
-To deploy a managed instance with availability groups, run the following command.
++
+### Deploy Azure Arc-enabled SQL Managed Instance with multiple replicas using Azure CLI
++
+Deploying an Azure Arc-enabled SQL Managed Instance in the Business Critical service tier enables multiple replicas to be created. The setup and configuration of contained availability groups among those instances is done automatically during provisioning.
+
+For instance, the following command creates a managed instance with 3 replicas.
+
+Indirectly connected mode:
+
+```azurecli
+az sql mi-arc create -n <instanceName> --k8s-namespace <namespace> --use-k8s --tier <tier> --replicas <number of replicas>
+```
+Example:
```azurecli
-az sql mi-arc create -n <name of instance> --replicas 3 --k8s-namespace <namespace> --use-k8s
+az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s --tier bc --replicas 3
```
-### Check status
-Once the instance has been deployed, run the following commands to check the status of your instance:
+Directly connected mode:
```azurecli
-az sql mi-arc list --k8s-namespace <namespace> --use-k8s
-az sql mi-arc show -n <name of instance> --k8s-namespace <namespace> --use-k8s
+az sql mi-arc create --name <name> --resource-group <group> --location <Azure location> --subscription <subscription> --custom-location <custom-location> --tier <tier> --replicas <number of replicas>
+```
+Example:
+```azurecli
+az sql mi-arc create --name sqldemo --resource-group rg --location westus2 --subscription xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --tier bc --replicas 3
```
-Example output:
+By default, all the replicas are configured in synchronous mode. This means any updates on the primary instance will be synchronously replicated to each of the secondary instances.
+
+## View and monitor availability group status
+
+Once the deployment is complete, retrieve the endpoint of the primary replica and connect to it from SQL Server Management Studio.
+
+For instance, if the SQL instance was deployed using `service-type=loadbalancer`, run one of the following commands to retrieve the endpoint to connect to:
+
+```azurecli
+az sql mi-arc list --k8s-namespace my-namespace --use-k8s
+```
+
+or
+```console
+kubectl get sqlmi -A
+```
+
+### Get the primary and secondary endpoints and AG status
+
+Use the `kubectl describe sqlmi` or `az sql mi-arc show` commands to view the primary and secondary endpoints, and availability group status.
+
+Example:
+
+```console
+kubectl describe sqlmi sqldemo -n my-namespace
+```
+or
+
+```azurecli
+az sql mi-arc show sqldemo --k8s-namespace my-namespace --use-k8s
+```
-```output
-user@pc:/# az sql mi-arc list --k8s-namespace <namespace> --use-k8s
-ExternalEndpoint Name Replicas State
- - -
-20.131.31.58,1433 sql2 3/3 Ready
+Example output:
-user@pc:/# az sql mi-arc show -n sql2 --k8s-namespace <namespace> --use-k8s
-{
-...
- "status": {
+```console
+ "status": {
"AGStatus": "Healthy",
- "externalEndpoint": "20.131.31.58,1433",
- "logSearchDashboard": "link to logs dashboard",
- "metricsDashboard": "link to metrics dashboard",
- "readyReplicas": "3/3",
+ "logSearchDashboard": "https://10.120.230.404:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqldemo'))",
+ "metricsDashboard": "https://10.120.230.46:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi1-0",
+ "mirroringEndpoint": "10.15.100.150:5022",
+ "observedGeneration": 1,
+ "primaryEndpoint": "10.15.100.150,1433",
+ "readyReplicas": "2/2",
+ "runningVersion": "v1.2.0_2021-12-15",
+ "secondaryEndpoint": "10.15.100.156,1433",
"state": "Ready" }
-}
```
-Notice the additional number of `Replicas` and the `AGstatus` field indicating the health of the availability group. If all replicas are up and synchronized, then this value is `healthy`.
+You can connect to the primary endpoint shown above by using SQL Server Management Studio and verify the replica states by using the following DMV query:
+
+```tsql
+SELECT * FROM sys.dm_hadr_availability_replica_states
+```
+++
+![Availability group](media/business-continuity/availability-group.png)
+
+And the contained availability group dashboard:
+
+![Contained availability group dashboard](media/business-continuity/ag-dashboard.png)
++
+## Failover scenarios
+
+Unlike SQL Server Always On availability groups, the contained availability group is a managed high availability solution. Hence, the failover modes are limited compared to the typical modes available with SQL Server Always On availability groups.
+
+You can deploy Business Critical service tier SQL managed instances in either a two-replica or a three-replica configuration. The effects of failures and the subsequent recoverability are different with each configuration. A three-replica instance provides a much higher level of availability and recovery than a two-replica instance.
+
+In a two-replica configuration, when both node states are `SYNCHRONIZED`, if the primary replica becomes unavailable, the secondary replica is automatically promoted to primary. When the failed replica becomes available, it's updated with all the pending changes. If there are connectivity issues between the replicas, the primary replica may not commit any transactions, because every transaction needs to be committed on both replicas before success is returned to the application.
+
+In a three-replica configuration, a transaction needs to commit on at least two of the three replicas before a success message is returned to the application. In the event of a failure, one of the secondaries is automatically promoted to primary while Kubernetes attempts to recover the failed replica. When the replica becomes available, it's automatically joined back into the contained availability group and pending changes are synchronized. If there are connectivity issues between the replicas and more than two replicas are out of sync, the primary replica won't commit any transactions.
+
+> [!NOTE]
+> It is recommended to deploy a Business Critical SQL Managed Instance in a three-replica configuration rather than a two-replica configuration to achieve near-zero data loss.
++
+To fail over from the primary replica to one of the secondaries for a planned event, run one of the following commands.
+
+If you're connected to the primary, you can use the following T-SQL to fail over the SQL instance to one of the secondaries:
+```tsql
+ALTER AVAILABILITY GROUP current SET (ROLE = SECONDARY);
+```
++
+If you're connected to the secondary, you can use the following T-SQL to promote the desired secondary to the primary replica.
+```tsql
+ALTER AVAILABILITY GROUP current SET (ROLE = PRIMARY);
+```
+### Preferred primary replica
+
+You can also set a specific replica to be the primary replica by using the Azure CLI as follows:
+```azurecli
+az sql mi-arc update --name <sqlinstance name> --k8s-namespace <namespace> --use-k8s --preferred-primary-replica <replica>
+```
+
+Example:
+```azurecli
+az sql mi-arc update --name sqldemo --k8s-namespace my-namespace --use-k8s --preferred-primary-replica sqldemo-3
+```
+
+> [!NOTE]
+> Kubernetes will attempt to set the preferred replica; however, it isn't guaranteed.
++
+ ## Restoring a database onto a multi-replica instance
-### Restore a database
Additional steps are required to restore a database into an availability group. The following steps demonstrate how to restore a database into a managed instance and add it to an availability group. 1. Expose the primary instance external endpoint by creating a new Kubernetes service.
Additional steps are required to restore a database into an availability group.
```sql SELECT @@SERVERNAME ```
- Create the kubernetes service to the primary instance by running the command below if your kubernetes cluster uses nodePort services. Replace `podName` with the name of the server returned at previous step, `serviceName` with the preferred name for the Kubernetes service created.
+ Create the Kubernetes service to the primary instance by running the following command if your Kubernetes cluster uses NodePort services. Replace `podName` with the name of the server returned in the previous step, and `serviceName` with the preferred name for the Kubernetes service to create.
- ```bash
+ ```console
kubectl -n <namespaceName> expose pod <podName> --port=1533 --name=<serviceName> --type=NodePort ``` For a LoadBalancer service, run the same command, except that the type of the service created is `LoadBalancer`. For example:
- ```bash
+ ```console
kubectl -n <namespaceName> expose pod <podName> --port=1533 --name=<serviceName> --type=LoadBalancer ``` Here is an example of this command run against Azure Kubernetes Service, where the pod hosting the primary is `sql2-0`:
- ```bash
+ ```console
kubectl -n arc-cluster expose pod sql2-0 --port=1533 --name=sql2-0-p --type=LoadBalancer ``` Get the IP of the Kubernetes service created:
- ```bash
+ ```console
kubectl get services -n <namespaceName> ``` 2. Restore the database to the primary instance endpoint.
Additional steps are required to restore a database into an availability group.
> [!IMPORTANT] > As a best practice, you should cleanup by deleting the Kubernetes service created above by running this command: >
->```bash
+>```console
>kubectl delete svc sql2-0-p -n arc >``` ### Limitations
-Azure Arc-enabled SQL Managed Instance availability groups has the same [limitations as Big Data Cluster availability groups. Click here to learn more.](/sql/big-data-cluster/deployment-high-availability#known-limitations)
+Azure Arc-enabled SQL Managed Instance availability groups has the same limitations as Big Data Cluster availability groups. For more information, see [Deploy SQL Server Big Data Cluster with high availability](/sql/big-data-cluster/deployment-high-availability#known-limitations).
## Next steps Learn more about [Features and Capabilities of Azure Arc-enabled SQL Managed Instance](managed-instance-features.md)+
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/release-notes.md
Previously updated : 12/16/2021 Last updated : 01/27/2022 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## January 2022
+
+This release is published January 27, 2022.
+
+### Data controller
+
+- Initiate an upgrade of the data controller from the portal in the direct connected mode
+- Removed the block on data controller upgrade when Azure Arc-enabled SQL Managed Instance business critical instances exist
+- Improved handling of the delete user experience in the Azure portal
+
+### SQL Managed Instance
+
+- Azure Arc-enabled SQL Managed Instance business critical instances can be upgraded from the January release and going forward (preview)
+- Business critical distributed availability group failover can now be done through a Kubernetes-native experience or the Azure CLI (indirect mode only) (preview)
+- Added support for `LicenseType: DisasterRecovery`, which ensures that instances used as business critical distributed availability group secondary replicas:
+ - Are not billed for
+ - Automatically seed the system databases from the primary replica when the distributed availability group is created. (preview)
+- New option added to `desiredVersion` called `auto` - automatically upgrades a given SQL instance when there is a new upgrade available (preview)
+- Update the configuration of SQL instances using Azure CLI in the direct connected mode
+ ## December 2021 This release is published December 16, 2021.
This release introduces the following features or capabilities:
- Delete an Azure Arc PostgreSQL Hyperscale from the Azure portal when its Data Controller was configured for Direct connectivity mode. - Deploy Azure Arc-enabled PostgreSQL Hyperscale from the Azure database for Postgres deployment page in the Azure portal. See [Select Azure Database for PostgreSQL deployment option - Microsoft Azure](https://ms.portal.azure.com/#create/Microsoft.PostgreSQLServer).-- Specify storage classes and Postgres extensions when deploying Azure Arc-enabled PostgreSQL Hyperscale from the Azure portal.
+- Specify storage classes and PostgreSQL extensions when deploying Azure Arc-enabled PostgreSQL Hyperscale from the Azure portal.
- Reduce the number of worker nodes in your Azure Arc-enabled PostgreSQL Hyperscale. You can do this operation (known as scale in as opposed to scale out when you increase the number of worker nodes) from `azdata` command-line. #### Azure Arc-enabled SQL Managed Instance
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-enabled data services.
+### January 27, 2022
+
+|Component |Value |
+|--||
+|Container images tag |v1.3.0_2022-01-27|
+|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1|
+|ARM API version|2021-11-01|
+|`arcdata` Azure CLI extension version| 1.2.0|
+|Arc enabled Kubernetes helm chart extension version|1.1.18501004|
+|Arc Data extension for Azure Data Studio|1.0|
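+
+To check which of these CRDs and served versions are present on a connected cluster, a quick sketch (assuming kubectl access to the cluster) is:
+
+```console
+# List the Arc data services CRDs installed on the cluster.
+kubectl get crds -o name | grep arcdata.microsoft.com
+
+# Show the versions served by a specific CRD, for example the SQL managed instance CRD.
+kubectl get crd sqlmanagedinstances.sql.arcdata.microsoft.com -o jsonpath='{.spec.versions[*].name}'
+```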
+ ### December 16, 2021 The following table describes the components in this release.
The following table describes the components in this release.
|Component |Value | |--|| |Container images tag | v1.2.0_2021-12-15 |
-|CRD names and versions | `datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v2beta2 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1 |
+|CRD names and versions | `datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1 |
|ARM API version | 2021-11-01 | |`arcdata` Azure CLI extension version | 1.1.2 | |Arc enabled Kubernetes helm chart extension version | 1.1.18031001 |
The following table describes the components in this release.
|Component |Value | |--|| |Container images tag | v1.1.0_2021-11-02 |
-|CRD names and versions | `datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v2beta2 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2 |
+|CRD names and versions | `datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2 |
|ARM API version | 2021-11-01 | |`arcdata` Azure CLI extension version | 1.1.0, (Nov 3),</br>1.1.1 (Nov4) | |Arc enabled Kubernetes helm chart extension version | 1.0.17551005 - Required if upgrade from GA <br/><br/> 1.1.17561007 - GA+1/Nov release chart |
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Nutanix | [Karbon](https://www.nutanix.com/products/karbon) | Version 2.2.1 | | Platform9 | [Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) | PMK Version [5.3.0](https://platform9.com/docs/kubernetes/release-notes#platform9-managed-kubernetes-version-53-release-notes); Kubernetes versions: v1.20.5, v1.19.6, v1.18.10 | | Cisco | [Intersight Kubernetes Service (IKS)](https://www.cisco.com/c/en/us/products/cloud-systems-management/cloud-operations/intersight-kubernetes-service.html) Distribution | Upstream K8s version: 1.19.5 |
+| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.21.3 |
+| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version 3.5.1 <br> MKE Version 3.4.7 |
+| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers:
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions.md
Arc-enabled servers support moving machines with one or more VM extensions insta
|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |MicrosoftMonitoringAgent |[Log Analytics VM extension for Windows](../../virtual-machines/extensions/oms-windows.md)| |Azure Monitor for VMs (insights) |Microsoft.Azure.Monitoring.DependencyAgent |DependencyAgentWindows | [Dependency agent virtual machine extension for Windows](../../virtual-machines/extensions/agent-dependency-windows.md)| |Azure Key Vault Certificate Sync | Microsoft.Azure.Key.Vault |KeyVaultForWindows | [Key Vault virtual machine extension for Windows](../../virtual-machines/extensions/key-vault-windows.md) |
-|Azure Monitor Agent |Microsoft.Azure.Monitor |AzureMonitorWindowsAgent |[Install the Azure Monitor agent (preview)](../../azure-monitor/agents/azure-monitor-agent-install.md) |
+|Azure Monitor Agent |Microsoft.Azure.Monitor |AzureMonitorWindowsAgent |[Install the Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-manage.md) |
|Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute |HybridWorkerForWindows |[Deploy an extension-based User Hybrid Runbook Worker](../../automation/extension-based-hybrid-runbook-worker-install.md) to execute runbooks locally. | ### Linux extensions
Arc-enabled servers support moving machines with one or more VM extensions insta
|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |OmsAgentForLinux |[Log Analytics VM extension for Linux](../../virtual-machines/extensions/oms-linux.md) | |Azure Monitor for VMs (insights) |Microsoft.Azure.Monitoring.DependencyAgent |DependencyAgentLinux |[Dependency agent virtual machine extension for Linux](../../virtual-machines/extensions/agent-dependency-linux.md) | |Azure Key Vault Certificate Sync | Microsoft.Azure.Key.Vault |KeyVaultForLinux | [Key Vault virtual machine extension for Linux](../../virtual-machines/extensions/key-vault-linux.md) |
-|Azure Monitor Agent |Microsoft.Azure.Monitor |AzureMonitorLinuxAgent |[Install the Azure Monitor agent (preview)](../../azure-monitor/agents/azure-monitor-agent-install.md) |
+|Azure Monitor Agent |Microsoft.Azure.Monitor |AzureMonitorLinuxAgent |[Install the Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-manage.md) |
|Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute |HybridWorkerForLinux |[Deploy an extension-based User Hybrid Runbook Worker](../../automation/extension-based-hybrid-runbook-worker-install.md) to execute runbooks locally.| ## Prerequisites
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-high-availability.md
description: Learn about Azure Cache for Redis high availability features and op
Previously updated : 02/08/2021 Last updated : 01/26/2022
Azure Cache for Redis implements high availability by using multiple VMs, called
| Option | Description | Availability | Standard | Premium | Enterprise | | - | - | - | :: | :: | :: | | [Standard replication](#standard-replication)| Dual-node replicated configuration in a single datacenter with automatic failover | 99.9% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |Γ£ö|Γ£ö|-|
-| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across AZs, with automatic failover | Up to 99.99% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|Γ£ö|Γ£ö|
+| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across AZs, with automatic failover | 99.9% in Premium; 99.99% in Enterprise (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|Γ£ö|Γ£ö|
| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | Up to 99.999% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|Γ£ö|Preview| ## Standard replication
azure-cache-for-redis Cache How To Redis Cli Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-redis-cli-tool.md
# Use the Redis command-line tool with Azure Cache for Redis
-*redis-cli.exe* is a popular command-line tool for interacting with an Azure Cache for Redis as a client. This tool is also available for use with Azure Cache for Redis.
-
-The tool is available for Windows platforms by downloading the [Redis command-line tools for Windows](https://github.com/MSOpenTech/redis/releases/).
+Use the popular `redis-cli.exe` command-line tool to interact with an Azure Cache for Redis as a client. The tool is available for Windows platforms by downloading the [Redis command-line tools for Windows](https://github.com/MSOpenTech/redis/releases/).
If you want to run the command-line tool on another platform, download open-source Redis from [https://redis.io/download](https://redis.io/download).
In this section, you retrieve the keys from the Azure portal.
[!INCLUDE [redis-cache-create](includes/redis-cache-access-keys.md)] - ## Enable access for redis-cli.exe With Azure Cache for Redis, only the TLS port (6380) is enabled by default. The `redis-cli.exe` command-line tool doesn't support TLS. You have two configuration choices to use it:
With Azure Cache for Redis, only the TLS port (6380) is enabled by default. The
On the *stunnel* Log Window menu, select **Configuration** > **Edit Configuration** to open the current configuration file.
- Add the following entry for *redis-cli.exe* under the **Service definitions** section. Insert your actual cache name in place of `yourcachename`.
+ Add the following entry for `redis-cli.exe` under the **Service definitions** section. Insert your actual cache name in place of `yourcachename`.
- ```
+ ```properties
[redis-cli] client = yes accept = 127.0.0.1:6380 connect = yourcachename.redis.cache.windows.net:6380 ```
- Save and close the configuration file.
+ Save and close the configuration file.
On the stunnel Log Window menu, select **Configuration** > **Reload Configuration**. - ## Connect using the Redis command-line tool.
-When using *stunnel*, run *redis-cli.exe*, and pass only your *port*, and *access key* (primary or secondary) to connect to the cache.
+When using *stunnel*, run `redis-cli.exe`, and pass only your *port* and *access key* (primary or secondary) to connect to the cache.
-```
+```console
redis-cli.exe -p 6380 -a YourAccessKey ```
redis-cli.exe -p 6380 -a YourAccessKey
If you're using a test cache with the **unsecure** non-TLS port, run `redis-cli.exe` and pass your *host name*, *port*, and *access key* (primary or secondary) to connect to the test cache.
-```
+```console
redis-cli.exe -h yourcachename.redis.cache.windows.net -p 6379 -a YourAccessKey ``` ![stunnel with redis-cli](media/cache-how-to-redis-cli-tool/cache-redis-cli-non-ssl.png) --- ## Next steps Learn more about using the [Redis Console](cache-configure.md#redis-console) to issue commands.
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-bindings.md
module.exports = df.orchestrator(function*(context) {
import azure.durable_functions as df def orchestrator_function(context: df.DurableOrchestrationContext):
- input_ = context.get_input()
+ input = context.get_input()
# Do some work
- return f"Hello {name}!"
+ return f"Hello {input['name']}!"
main = df.Orchestrator.create(orchestrator_function) ```
module.exports = df.orchestrator(function*(context) {
import azure.durable_functions as df def orchestrator_function(context: df.DurableOrchestrationContext):
- input_ = context.get_input()
- result = yield context.call_activity('SayHello', name)
+ input = context.get_input()
+ result = yield context.call_activity('SayHello', input['name'])
return result main = df.Orchestrator.create(orchestrator_function)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
The configuration is specific to Python function apps. It defines the prioritiza
|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`1`| Prioritize loading the Python libraries from application's package defined in requirements.txt. This prevents your libraries from colliding with internal Python worker's libraries. | ## PYTHON_ENABLE_DEBUG_LOGGING
-Enables debug-level logging in a Python function app. A value of '1' enables debug-level logging. Without this setting, only information and higher level logs are sent from the Python worker to the Functions host. Use this setting when debugging or tracing your Python function executions.
-
-|Key|Sample value|
-|||
-|PYTHON_ENABLE_DEBUG_LOGGING|`1`|
+Enables debug-level logging in a Python function app. A value of `1` enables debug-level logging. Without this setting or with a value of `0`, only information and higher level logs are sent from the Python worker to the Functions host. Use this setting when debugging or tracing your Python function executions.
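+
+For example, one way to turn on this setting for an existing function app is with the Azure CLI; this is a sketch, where the app and resource group names are placeholders:
+
+```azurecli
+# Enable debug-level logging for the Python worker (placeholder app and resource group names).
+az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings PYTHON_ENABLE_DEBUG_LOGGING=1
+```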
When debugging Python functions, make sure to also set a debug or trace [logging level](functions-host-json.md#logging) in the host.json file, as needed. To learn more, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-azure-sql.md
Title: Azure SQL bindings for Functions
description: Understand how to use Azure SQL bindings in Azure Functions. Previously updated : 12/15/2021 Last updated : 1/25/2022 ms.devlang: csharp
This set of articles explains how to work with [Azure SQL](../azure-sql/index.ym
### Functions
-Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
| Language | Add by... | Remarks |-||-|
Working with the trigger and bindings requires that you reference the appropriat
## Known issues -- Output bindings against tables with columns of data types `NTEXT`, `TEXT`, or `IMAGE` are not supported and data upserts will fail. These types [will be removed](/sql/t-sql/data-types/ntext-text-and-image-transact-sql) in a future version of SQL Server and are not compatible with the `OPENJSON` function used by this Azure Functions binding.-- Case-sensitive [collations](/sql/relational-databases/collations/collation-and-unicode-support#Collation_Defn) are not currently supported. [GitHub item #133](https://github.com/Azure/azure-functions-sql-extension/issues/133) tracks progress on this issue.
+- Output bindings against tables with columns of data types `NTEXT`, `TEXT`, or `IMAGE` aren't supported and data upserts will fail. These types [will be removed](/sql/t-sql/data-types/ntext-text-and-image-transact-sql) in a future version of SQL Server and aren't compatible with the `OPENJSON` function used by this Azure Functions binding.
## Open source
-The Azure SQL bindings for Azure Functions are open source and available on the repository at [https://github.com/Azure/azure-functions-sql-extension](https://github.com/Azure/azure-functions-sql-extension).
+The Azure SQL bindings for Azure Functions are open-source and available on the repository at [https://github.com/Azure/azure-functions-sql-extension](https://github.com/Azure/azure-functions-sql-extension).
## Next steps
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 01/25/2022 Last updated : 01/26/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; | | [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | | [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; |
-| [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
+| [Multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Storage: Tables](https://azure.microsoft.com/services/storage/tables/) | &#x2705; | &#x2705; | | [StorSimple](https://azure.microsoft.com/services/storsimple/) | &#x2705; | &#x2705; | | [Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) | &#x2705; | &#x2705; |
-| [Video Analyzer for Media (formerly Video Indexer)](../../azure-video-analyzer/video-analyzer-for-media-docs/index.yml) | &#x2705; | &#x2705; |
+| [Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/index.yml) (formerly Video Indexer) | &#x2705; | &#x2705; |
| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | &#x2705; | &#x2705; | | [Virtual Machines (incl. Reserved Instances)](https://azure.microsoft.com/services/virtual-machines/) | &#x2705; | &#x2705; | | [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | &#x2705; | | |
| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Microsoft Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (formerly Azure Sentinel) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
azure-government Documentation Government Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-developer-guide.md
Title: Azure Government developer guide | Microsoft Docs
-description: This article compares features and provides guidance on developing applications for Azure Government.
-
-cloud: gov
--
+ Title: Azure Government developer guide
+description: Provides guidance on developing applications for Azure Government
Previously updated : 8/04/2021++
+recommendations: false
Last updated : 01/26/2022 # Azure Government developer guide
Azure Government is a separate instance of the Microsoft Azure service. It addre
Microsoft provides various tools to help you create and deploy cloud applications on global Azure and Azure Government.
-When you create and deploy applications to Azure Government services, as opposed to global Azure, you need to know the key differences between the two cloud environments. The specific areas to understand are:
+When you create and deploy applications on Azure Government, you need to know the key differences between Azure Government and global Azure. The specific areas to understand are:
- Setting up and configuring your programming environment - Configuring endpoints
The information in this document summarizes the differences between the two clou
- [Azure Government](https://azure.microsoft.com/global-infrastructure/government/) site - [Microsoft Trust Center](https://www.microsoft.com/trust-center/product-overview)-- [Azure Documentation Center](../index.yml)
+- [Azure documentation center](../index.yml)
- [Azure Blogs](https://azure.microsoft.com/blog/)
-This content is intended for partners and developers who are deploying to Azure Government.
+This content is intended for Microsoft partners and developers who are deploying to Azure Government.
## Guidance for developers
Most of the currently available technical content assumes that applications are
- Certain services and features that are in specific regions of global Azure might not be available in Azure Government. - Feature configurations in Azure Government might differ from those in global Azure.
-Therefore, it's important to review your sample code, configurations, and steps to ensure that you are building and executing within the Azure Government cloud services environment.
+Therefore, it's important to review your sample code and configurations to ensure that you are building within the Azure Government cloud services environment.
+
+### Endpoint mapping
+
+Service endpoints in Azure Government are different than in Azure. For a mapping between Azure and Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).
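+
+For example, command-line tools must also target the Azure Government cloud rather than global Azure so that they use the correct endpoints; a minimal sketch with the Azure CLI:
+
+```azurecli
+# Point the Azure CLI at the Azure Government cloud before signing in.
+az cloud set --name AzureUSGovernment
+az login
+
+# Switch back to global Azure when finished.
+az cloud set --name AzureCloud
+```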
-For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+### Feature variations
+
+For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview. In general, service availability in Azure Government implies that all corresponding service features are available to you. Variations to this approach and other applicable limitations are tracked and explained in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#service-availability).
### Quickstarts
-Navigate through the links below to get started using Azure Government:
+Navigate through the following links to get started using Azure Government:
- [Login to Azure Government portal](./documentation-government-get-started-connect-with-portal.md) - [Connect with PowerShell](./documentation-government-get-started-connect-with-ps.md)
The [Azure Government video library](https://aka.ms/AzureGovVideos) contains man
## Compliance
-For more information on Azure Government Compliance, refer to the [compliance documentation](./documentation-government-plan-compliance.md).
-
-### Azure Blueprints
-
-[Azure Blueprints](../governance/blueprints/overview.md) is a service that helps you deploy and update cloud environments in a repeatable manner using composable artifacts such as Azure Resource Manager templates to provision resources, role-based access controls, and policies. Resources provisioned through Azure Blueprints adhere to an organizationΓÇÖs standards, patterns, and compliance requirements. The overarching goal of Azure Blueprints is to help automate compliance and cybersecurity risk management in cloud environments. To help you deploy a core set of policies for any Azure-based architecture that requires compliance with certain US government compliance requirements, see [Azure Blueprints samples](../governance/blueprints/samples/index.md).
-
-## Endpoint mapping
-
-Service endpoints in Azure Government are different than in Azure. For a mapping between Azure and Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).
+For more information about Azure Government compliance assurances, see [Azure Government compliance](./documentation-government-plan-compliance.md) documentation.
## Next steps
For more information about Azure Government, see the following resources:
- [Sign up for a trial](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=Trial) - [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/) - [Ask questions via the azure-gov tag in StackOverflow](https://stackoverflow.com/tags/azure-gov)-- [Azure Government Overview](./documentation-government-welcome.md)-- [Azure Government Blog](https://blogs.msdn.microsoft.com/azuregov/)-- [Azure Compliance](../compliance/index.yml)
+- [Azure Government blog](https://devblogs.microsoft.com/azuregov/)
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure compliance](../compliance/index.yml)
azure-government Documentation Government Plan Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-plan-compliance.md
Title: Azure Government Compliance | Microsoft Docs
-description: Provides and overview of the available compliance services for Azure Government
-
-cloud: gov
-
+ Title: Azure Government compliance
+description: Provides an overview of the available compliance assurances for Azure Government
Previously updated : 01/20/2020-+++
+recommendations: false
Last updated : 01/26/2022 + # Azure Government compliance
-## Azure Blueprints
+Microsoft Azure Government meets demanding US government compliance requirements that mandate formal assessments and authorizations, including:
-[Azure Blueprints](https://azure.microsoft.com/services/blueprints/) can help you automate the process of achieving compliance on Azure Government. FedRAMP and other [standards-based blueprint samples](../governance/blueprints/samples/index.md) are available in Azure Blueprints.
+- [Federal Risk and Authorization Management Program](https://www.fedramp.gov/) (FedRAMP)
+- Department of Defense (DoD) Cloud Computing [Security Requirements Guide](https://public.cyber.mil/dccs/dccs-documents/) (SRG) Impact Level (IL) 2, 4, and 5
-## General Data Protection Regulation (GDPR) Data Subject Requests (DSRs) on Azure Government
+Azure Government maintains the following authorizations that pertain to Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia:
-Azure tenant administrators can use the [User Privacy blade](https://portal.azure.us/#blade/Microsoft_Azure_Policy/UserPrivacyMenuBlade/Overview) in the Azure portal to export and/or delete personal data generated during a customer's use of Azure Government services. For more information about Data Subject Requests, see [Data Subject Requests for the GDPR](/microsoft-365/compliance/gdpr-dsr-azure).
+- [FedRAMP High](/azure/compliance/offerings/offering-fedramp) Provisional Authorization to Operate (P-ATO) issued by the FedRAMP Joint Authorization Board (JAB)
+- [DoD IL2](/azure/compliance/offerings/offering-dod-il2) Provisional Authorization (PA) issued by the Defense Information Systems Agency (DISA)
+- [DoD IL4](/azure/compliance/offerings/offering-dod-il4) PA issued by DISA
+- [DoD IL5](/azure/compliance/offerings/offering-dod-il5) PA issued by DISA
-## Next steps
+For links to additional Azure Government compliance assurances, see [Azure compliance](../compliance/index.yml). For example, Azure Government can help you meet your compliance obligations with many US government requirements, including:
+
+- [Criminal Justice Information Services (CJIS)](/azure/compliance/offerings/offering-cjis)
+- [Internal Revenue Service (IRS) Publication 1075](/azure/compliance/offerings/offering-irs-1075)
+- [Defense Federal Acquisition Regulation Supplement (DFARS)](/azure/compliance/offerings/offering-dfars)
+- [International Traffic in Arms Regulations (ITAR)](/azure/compliance/offerings/offering-itar)
+- [Export Administration Regulations (EAR)](/azure/compliance/offerings/offering-ear)
+- [Federal Information Processing Standard (FIPS) 140](/azure/compliance/offerings/offering-fips-140-2)
+- [National Institute of Standards and Technology (NIST) 800-171](/azure/compliance/offerings/offering-nist-800-171)
+- [National Defense Authorization Act (NDAA) Section 889 and Section 1634](/azure/compliance/offerings/offering-ndaa-section-889)
+- [North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) standards](/azure/compliance/offerings/offering-nerc)
+- [Health Insurance Portability and Accountability Act of 1996 (HIPAA)](/azure/compliance/offerings/offering-hipaa-us)
+- [Electronic Prescriptions for Controlled Substances (EPCS)](/azure/compliance/offerings/offering-epcs-us)
+- And many more US government, global, and industry standards
+
+For current Azure Government regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
+
+> [!NOTE]
+>
+> - Some Azure services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md).**
+> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](./documentation-government-overview-dod.md#azure-government-dod-regions-il5-audit-scope).**
+
+## Audit documentation
+
+You can access Azure and Azure Government audit reports and related documentation via the [Service Trust Portal](https://servicetrust.microsoft.com) (STP) in the following sections:
+
+- STP [Audit Reports](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3), which has a subsection for FedRAMP Reports.
+- STP [Data Protection Resources](https://servicetrust.microsoft.com/ViewPage/TrustDocumentsV3), which is further divided into Compliance Guides, FAQ and White Papers, and Pen Test and Security Assessments subsections.
-Learn about [Azure Blueprints](https://azure.microsoft.com/services/blueprints/)
+You must sign in to access audit reports on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](https://aka.ms/stphelp).
+
+Alternatively, you can access certain audit reports and certificates in the Azure or Azure Government portal by navigating to *Home > Security Center > Regulatory compliance > Audit reports* or using direct links based on your subscription (sign in required):
+
+- Azure portal [audit reports blade](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade)
+- Azure Government portal [audit reports blade](https://portal.azure.us/#blade/Microsoft_Azure_Security/AuditReportsBlade)
+
+You must have an existing subscription or free trial account in [Azure](https://azure.microsoft.com/free/) or [Azure Government](https://azure.microsoft.com/global-infrastructure/government/request/) to download audit documents.
+
+## Azure Policy regulatory compliance built-in initiatives
+
+For additional customer assistance, Microsoft provides the Azure Policy regulatory compliance built-in initiatives, which map to **compliance domains** and **controls** in key US government standards:
+
+- [FedRAMP High](../governance/policy/samples/gov-fedramp-high.md)
+- [DoD IL4](../governance/policy/samples/gov-dod-impact-level-4.md)
+- [DoD IL5](../governance/policy/samples/gov-dod-impact-level-5.md)
+
+For additional regulatory compliance built-in initiatives that pertain to Azure Government, see [Azure Policy samples](../governance/policy/samples/index.md).
+
+Regulatory compliance in Azure Policy provides built-in initiative definitions to view a list of the controls and compliance domains based on responsibility (customer, Microsoft, or shared). For Microsoft-responsible controls, we provide additional audit result details based on third-party attestations and our control implementation details to achieve that compliance. Each control is associated with one or more Azure Policy definitions. These policies may help you [assess compliance](/azure/governance/policy/how-to/get-compliance-data) with the control; however, compliance in Azure Policy is only a partial view of your overall compliance status. Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to more granular status.
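+As a minimal sketch of assigning one of these built-in initiatives with PowerShell (the display name filter, assignment name, and scope below are assumptions to adjust for your environment):
+
+```powershell
+# A hedged sketch: assign the FedRAMP High built-in initiative at subscription scope.
+# The display name, assignment name, and scope are placeholders/assumptions.
+$scope = "/subscriptions/<subscription-id>"
+$initiative = Get-AzPolicySetDefinition | Where-Object { $_.Properties.DisplayName -eq "FedRAMP High" }
+New-AzPolicyAssignment -Name "fedramp-high" -DisplayName "FedRAMP High" -Scope $scope -PolicySetDefinition $initiative
+```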
+
+## Next steps
-Visit the [Microsoft Azure Government Blog](https://devblogs.microsoft.com/azuregov/)
+- [Azure compliance](../compliance/index.yml)
+- [Azure Policy overview](../governance/policy/overview.md)
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Connect with Azure Government portal](./documentation-government-get-started-connect-with-portal.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+- [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
+- [Azure Government DoD overview](./documentation-government-overview-dod.md)
azure-government Documentation Government Welcome https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-welcome.md
recommendations: false Previously updated : 01/25/2022 Last updated : 01/26/2022 # What is Azure Government?
-US government agencies or their partners interested in cloud services that meet government security and compliance requirements, can be confident that [Microsoft Azure Government](https://azure.microsoft.com/global-infrastructure/government/) provides world-class [security and compliance](../compliance/index.yml). Azure Government delivers a dedicated cloud enabling government agencies and their partners to transform mission-critical workloads to the cloud. Azure Government services handle data that is subject to various [US government regulations and requirements](/azure/compliance/offerings/offering-cjis). To provide you with the highest level of security and compliance, Azure Government uses physically isolated datacenters and networks located in the US only.
+US government agencies or their partners interested in cloud services that meet government security and compliance requirements can be confident that [Microsoft Azure Government](https://azure.microsoft.com/global-infrastructure/government/) provides world-class security and compliance. Azure Government delivers a dedicated cloud enabling government agencies and their partners to transform mission-critical workloads to the cloud. Azure Government services can accommodate data that is subject to various [US government regulations and requirements](../compliance/index.yml).
+
+To provide you with the highest level of security and compliance, Azure Government uses physically isolated datacenters and networks located in the US only. Compared to Azure global, Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the US and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening).
Azure Government customers (US federal, state, and local government or their partners) are subject to validation of eligibility. If there's a question about eligibility for Azure Government, you should consult your account team. To sign up for a trial, request your [trial subscription](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=Trial).
To start using Azure Government, first check out [Guidance for developers](./doc
- [Acquiring and accessing Azure Government](https://azure.microsoft.com/offers/azure-government/) - [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
- View [YouTube videos](https://www.youtube.com/playlist?list=PLLasX02E8BPA5IgCPjqWms5ne5h4briK7)
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-coverage.md
Title: Microsoft Azure Maps Weather services coverage
description: Learn about Microsoft Azure Maps Weather services coverage Previously updated : 01/18/2021 Last updated : 01/26/2022
# Azure Maps Weather services coverage
-This article provides coverage information for Azure Maps [Weather services](/rest/api/maps/weather). Azure Maps Weather data services returns details such as radar tiles, current weather conditions, weather forecasts, and the weather along a route.
+This article provides coverage information for Azure Maps [Weather services](/rest/api/maps/weather). Azure Maps Weather data services return details such as radar tiles, current weather conditions, weather forecasts, the weather along a route, air quality, historical weather, and tropical storm information.
Azure Maps doesn't have the same level of information and accuracy for all countries and regions.
-The following table provides information about what kind of weather information you can request from each country/region.
+The following table explains the symbol that appears in the *Other* column and lists the weather information you can request from that country/region.
-| Symbol | Meaning |
-|--||
-|* |Covers Current Conditions, Hourly Forecast, Quarter-day Forecast, Daily Forecast, Weather Along Route and Daily Indices. |
+| Symbol | Meaning |
+|:-:|--|
+| * | Refers to coverage of the following features: Air Quality, Current Conditions, Daily Forecast, Daily Indices, Historical Weather, Hourly Forecast, Quarter-day Forecast, Tropical Storms, and Weather Along Route. |
## Americas
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agents-overview.md
Consider the following when using the Dependency agent:
## Virtual machine extensions
-The [Azure Monitor agent](./azure-monitor-agent-install.md#virtual-machine-extension-details) is only available as a virtual machine extension. The Log Analytics extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) install the Log Analytics agent on Azure virtual machines. The Azure Monitor Dependency extension for [Windows](../../virtual-machines/extensions/agent-dependency-windows.md) and [Linux](../../virtual-machines/extensions/agent-dependency-linux.md) install the Dependency agent on Azure virtual machines. These are the same agents described above but allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
+The [Azure Monitor agent](./azure-monitor-agent-manage.md#virtual-machine-extension-details) is only available as a virtual machine extension. The Log Analytics extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) install the Log Analytics agent on Azure virtual machines. The Azure Monitor Dependency extension for [Windows](../../virtual-machines/extensions/agent-dependency-windows.md) and [Linux](../../virtual-machines/extensions/agent-dependency-linux.md) install the Dependency agent on Azure virtual machines. These are the same agents described above but allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
On hybrid machines, use [Azure Arc-enabled servers](../../azure-arc/servers/manage-vm-extensions.md) to deploy the Azure Monitor agent, Log Analytics and Azure Monitor Dependency VM extensions.
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-manage.md
+
+ Title: Manage the Azure Monitor agent
+description: Options for managing the Azure Monitor agent (AMA) on Azure virtual machines and Azure Arc-enabled servers.
+++ Last updated : 01/27/2022++++
+# Manage the Azure Monitor agent
+This article describes the options currently available to install, uninstall, and update the [Azure Monitor agent](azure-monitor-agent-overview.md). This agent extension can be installed on Azure virtual machines, scale sets, and Azure Arc-enabled servers. The article also lists the options to create [associations with data collection rules](data-collection-rule-azure-monitor-agent.md), which define the data that the agent should collect.
+
+## Virtual machine extension details
+The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. It can be installed using any of the methods for installing virtual machine extensions, including those described in this article.
+
+| Property | Windows | Linux |
+|:|:|:|
+| Publisher | Microsoft.Azure.Monitor | Microsoft.Azure.Monitor |
+| Type | AzureMonitorWindowsAgent | AzureMonitorLinuxAgent |
+| TypeHandlerVersion | 1.0 | 1.5 |
+
+## Extension versions
+We strongly recommend updating to the generally available versions listed below instead of using preview or intermediate versions.
+
+| Release Date | Release notes | Windows | Linux |
+|:|:|:|:|
+| June 2021 | General availability announced. <ul><li>All features except metrics destination now generally available</li><li>Production quality, security and compliance</li><li>Availability in all public regions</li><li>Performance and scale improvements for higher EPS</li></ul> [Learn more](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-now-generally-available/) | 1.0.12.0 | 1.9.1.0 |
+| July 2021 | <ul><li>Support for direct proxies</li><li>Support for Log Analytics gateway</li></ul> [Learn more](https://azure.microsoft.com/updates/general-availability-azure-monitor-agent-and-data-collection-rules-now-support-direct-proxies-and-log-analytics-gateway/) | 1.1.1.0 | 1.10.5.0 |
+| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>1</sup> |
+| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Addressed regression introduced in 1.1.3.1<sup>2</sup> for Arc Windows servers</li></ul> | 1.1.3.2 | 1.12.2.0 <sup>2</sup> |
+| December 2021 | Fixed issues impacting Linux Arc-enabled servers | N/A | 1.14.7.0<sup>3</sup> |
+
+<sup>1</sup> Do not use AMA Linux version 1.10.7.0
+<sup>2</sup> Known regression where this version doesn't work on Arc-enabled servers
+<sup>3</sup> A bug was identified in which Linux performance counter data stops flowing when the machine is restarted or rebooted. A fix is underway and will be available in the next monthly version update.
++
+## Prerequisites
+The following prerequisites must be met prior to installing the Azure Monitor agent.
+
+- To install the agent on physical servers and virtual machines hosted *outside* of Azure (that is, on-premises), you must first [install the Azure Arc agent](../../azure-arc/servers/agent-overview.md#installation-and-configuration) (at no added cost)
+- [Managed system identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) must be enabled on Azure virtual machines (see the sketch after this list); this isn't required for Azure Arc-enabled servers. The system identity is enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal).
+- The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine.
+- The virtual machine must have access to the following HTTPS endpoints:
+ - *.ods.opinsights.azure.com
+ - *.ingest.monitor.azure.com
+ - *.control.monitor.azure.com
++
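+As a minimal sketch of the managed identity prerequisite above (resource names are placeholders), you can enable a system-assigned identity on an existing Azure virtual machine with PowerShell:
+
+```powershell
+# A hedged sketch: enable system-assigned managed identity on an existing Azure VM.
+# Resource group and VM names are placeholders.
+$vm = Get-AzVM -ResourceGroupName "<resource-group-name>" -Name "<virtual-machine-name>"
+Update-AzVM -ResourceGroupName "<resource-group-name>" -VM $vm -IdentityType SystemAssigned
+```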
+## Using the Azure portal
+
+### Install
+To install the Azure Monitor agent using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal) in the Azure portal. This not only creates the rule, but also associates it with the selected resources and installs the Azure Monitor agent on them if it's not already installed.
+
+### Uninstall
+To uninstall the Azure Monitor agent using the Azure portal, navigate to your virtual machine, scale set, or Arc-enabled server, select the **Extensions** tab, and then select **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, select **Uninstall**.
+
+### Update
+To perform a **one-time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
++
+## Using Resource Manager template
+
+### Install
+You can use Resource Manager templates to install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers and to create an association with data collection rules. You must create any data collection rule prior to creating the association.
+
+Get sample templates for installing the agent and creating the association from the following:
+
+- [Template to install Azure Monitor agent (Azure and Azure Arc)](../agents/resource-manager-agent.md#azure-monitor-agent)
+- [Template to create association with data collection rule](./resource-manager-data-collection-rules.md)
+
+Install the templates using [any deployment method for Resource Manager templates](../../azure-resource-manager/templates/deploy-powershell.md) such as the following commands.
+
+# [PowerShell](#tab/ARMAgentPowerShell)
+```powershell
+New-AzResourceGroupDeployment -ResourceGroupName "<resource-group-name>" -TemplateFile "<template-filename.json>" -TemplateParameterFile "<parameter-filename.json>"
+```
+# [CLI](#tab/ARMAgentCLI)
+```azurecli
+az deployment group create --resource-group "<resource-group-name>" --template-file "<template-filename.json>" --parameters "@<parameter-filename.json>"
+```
++
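+If you prefer to create the data collection rule association directly instead of through a template, the following is a hedged sketch using the Az.Monitor module; the cmdlet and parameter names reflect recent module versions and the resource IDs are placeholders:
+
+```powershell
+# A hedged sketch: associate an existing data collection rule with a virtual machine (Az.Monitor module).
+# The association name, target resource ID, and rule ID are placeholders.
+New-AzDataCollectionRuleAssociation -AssociationName "<association-name>" `
+  -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Compute/virtualMachines/<virtual-machine-name>" `
+  -RuleId "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Insights/dataCollectionRules/<rule-name>"
+```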
+## Using PowerShell
+You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers using the PowerShell command for adding a virtual machine extension.
+
+### Install on Azure virtual machines
+Use the following PowerShell commands to install the Azure Monitor agent on Azure virtual machines.
+# [Windows](#tab/PowerShellWindows)
+```powershell
+Set-AzVMExtension -Name AMAWindows -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number>
+```
+# [Linux](#tab/PowerShellLinux)
+```powershell
+Set-AzVMExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number>
+```
++
+### Uninstall on Azure virtual machines
+Use the following PowerShell commands to uninstall the Azure Monitor agent on Azure virtual machines.
+# [Windows](#tab/PowerShellWindows)
+```powershell
+Remove-AzVMExtension -Name AMAWindows -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name>
+```
+# [Linux](#tab/PowerShellLinux)
+```powershell
+Remove-AzVMExtension -Name AMALinux -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name>
+```
++
+### Update on Azure virtual machines
+To perform a **one-time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+
+### Install on Azure Arc-enabled servers
+Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers.
+# [Windows](#tab/PowerShellWindowsArc)
+```powershell
+New-AzConnectedMachineExtension -Name AMAWindows -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
+```
+# [Linux](#tab/PowerShellLinuxArc)
+```powershell
+New-AzConnectedMachineExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
+```
++
+### Uninstall on Azure Arc-enabled servers
+Use the following PowerShell commands to uninstall the Azure Monitor agent on Azure Arc-enabled servers.
+# [Windows](#tab/PowerShellWindowsArc)
+```powershell
+Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AMAWindows
+```
+# [Linux](#tab/PowerShellLinuxArc)
+```powershell
+Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AMALinux
+```
++
+### Upgrade on Azure Arc-enabled servers
+To perform a **one-time** upgrade of the agent, use the following PowerShell commands:
+
+# [Windows](#tab/PowerShellWindowsArc)
+```powershell
+$target = @{"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent" = @{"targetVersion"=<target-version-number>}}
+Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target
+```
+# [Linux](#tab/PowerShellLinuxArc)
+```powershell
+$target = @{"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent" = @{"targetVersion"=<target-version-number>}}
+Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target
+```
++
+We **recommend** enabling automatic update of the agent through the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enabling-automatic-extension-upgrade-preview) feature, using the following PowerShell commands.
+# [Windows](#tab/PowerShellWindowsArc)
+```powershell
+Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AMAWindows -EnableAutomaticUpgrade
+```
+# [Linux](#tab/PowerShellLinuxArc)
+```powershell
+Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AMALinux -EnableAutomaticUpgrade
+```
+++
+## Using Azure CLI
+You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers using the Azure CLI command for adding a virtual machine extension.
+
+### Install on Azure virtual machines
+Use the following CLI commands to install the Azure Monitor agent on Azure virtual machines.
+# [Windows](#tab/CLIWindows)
+```azurecli
+az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id>
+```
+# [Linux](#tab/CLILinux)
+```azurecli
+az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id>
+```
++
+### Uninstall on Azure virtual machines
+Use the following CLI commands to uninstall the Azure Monitor agent on Azure virtual machines.
+# [Windows](#tab/CLIWindows)
+```azurecli
+az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorWindowsAgent
+```
+# [Linux](#tab/CLILinux)
+```azurecli
+az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorLinuxAgent
+```
++
+### Update on Azure virtual machines
+To perform a **one-time update** of the agent, you must first uninstall the existing agent version and then install the new version as described above.
+
+### Install on Azure Arc-enabled servers
+Use the following CLI commands to install the Azure Monitor agent on Azure Arc-enabled servers.
+
+# [Windows](#tab/CLIWindowsArc)
+```azurecli
+az connectedmachine extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
+```
+# [Linux](#tab/CLILinuxArc)
+```azurecli
+az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
+```
++
+### Uninstall on Azure Arc-enabled servers
+Use the following CLI commands to uninstall the Azure Monitor agent on Azure Arc-enabled servers.
+
+# [Windows](#tab/CLIWindowsArc)
+```azurecli
+az connectedmachine extension delete --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
+```
+# [Linux](#tab/CLILinuxArc)
+```azurecli
+az connectedmachine extension delete --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
+```
++
+### Upgrade on Azure Arc-enabled servers
+To perform a **one-time upgrade** of the agent, use the following CLI commands:
+# [Windows](#tab/CLIWindowsArc)
+```azurecli
+az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
+```
+# [Linux](#tab/CLILinuxArc)
+```azurecli
+az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
+```
++
+We **recommend** enabling automatic update of the agent through the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enabling-automatic-extension-upgrade-preview) feature, using the following Azure CLI commands.
+# [Windows](#tab/CLIWindowsArc)
+```azurecli
+az connectedmachine extension update --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+```
+# [Linux](#tab/CLILinuxArc)
+```azurecli
+az connectedmachine extension update --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+```
+++
+## Using Azure Policy
+Use the following policies and policy initiatives to automatically install the agent and associate it with a data collection rule every time you create a virtual machine.
+
+### Built-in policy initiatives
+[View prerequisites for agent installation](azure-monitor-agent-manage.md#prerequisites).
+
+Policy initiatives for Windows and Linux virtual machines consist of individual policies that:
+
+- Install the Azure Monitor agent extension on the virtual machine.
+- Create and deploy the association to link the virtual machine to a data collection rule.
+
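+To locate these built-in initiatives programmatically, the following is a hedged sketch; the display name filter is an assumption, and property paths can vary by Az module version:
+
+```powershell
+# A hedged sketch: list built-in policy initiatives related to the Azure Monitor agent.
+# The display name filter is an assumption; adjust it as needed.
+Get-AzPolicySetDefinition -Builtin |
+  Where-Object { $_.Properties.DisplayName -like "*Azure Monitor*" } |
+  ForEach-Object { $_.Properties.DisplayName }
+```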
+![Partial screenshot from the Azure Policy Definitions page showing two built-in policy initiatives for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png)
+
+### Built-in policies
+You can choose to use the individual policies from their respective policy initiatives, based on your needs. For example, if you only want to automatically install the agent, use the first policy from the initiative as shown in the following example.
+
+![Partial screenshot from the Azure Policy Definitions page showing policies contained within the initiative for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-policy.png)
+
+### Remediation
+The initiatives or policies will apply to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to *existing resources*, so you can configure the Azure Monitor agent for any resources that were already created.
+
+When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. See [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) for details on the remediation.
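+If you create the assignment outside the portal, you can also start a remediation task with PowerShell. The following is a hedged sketch using the Az.PolicyInsights module; the remediation name, assignment ID, and policy definition reference ID are placeholders:
+
+```powershell
+# A hedged sketch: create a remediation task for an existing policy assignment (Az.PolicyInsights module).
+# PolicyDefinitionReferenceId identifies the individual policy within the initiative; all values are placeholders.
+Start-AzPolicyRemediation -Name "ama-dcr-remediation" `
+  -PolicyAssignmentId "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/<assignment-name>" `
+  -PolicyDefinitionReferenceId "<policy-definition-reference-id>"
+```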
+
+![Screenshot that shows initiative remediation for the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-remediation.png)
+
+## Next steps
+
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Title: Migrate from legacy agents to the new Azure Monitor agent
-description: Guidance for migrating from the existing legacy agents to the new Azure Monitor agent (AMA) and data collection rules (DCR).
+description: This article provides guidance for migrating from the existing legacy agents to the new Azure Monitor agent (AMA) and data collection rules (DCR).
This article provides high-level guidance on when and how to migrate to the new
- Destinations, such as Log Analytics workspaces 1. [Create new data collection rules](/rest/api/monitor/datacollectionrules/create#examples) by using the preceding configuration. As a best practice, you might want to have a separate data collection rule for Windows versus Linux sources. Or you might want separate data collection rules for individual teams with different data collection needs. 1. [Enable system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md#system-assigned-managed-identity) on target resources.
-2. Install the Azure Monitor agent extension. Deploy data collection rule associations on all target resources by using the [built-in policy initiative](azure-monitor-agent-install.md#install-with-azure-policy). Provide the preceding data collection rule as an input parameter.
+2. Install the Azure Monitor agent extension. Deploy data collection rule associations on all target resources by using the [built-in policy initiative](azure-monitor-agent-manage.md#using-azure-policy). Provide the preceding data collection rule as an input parameter.
1. Validate data is flowing as expected via the Azure Monitor agent. Check the **Heartbeat** table for new agent version values. Ensure it matches data flowing through the existing Log Analytics agent. 2. Validate all downstream dependencies like dashboards, alerts, and runbook workers. Workbooks all continue to function now by using data from the new agent. 3. [Uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from the resources. Don't uninstall it if you need to use it for System Center Operations Manager scenarios or other solutions not yet available on the Azure Monitor agent.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The following table shows the current support for the Azure Monitor agent with o
| Azure service | Current support | More information | |:|:|:| | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Private preview | [Sign-up link](https://aka.ms/AMAgent) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [GA](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
The following table shows the current support for the Azure Monitor agent with Azure Monitor features.
To configure the agent to use private links for network communications with Azur
## Next steps -- [Install the Azure Monitor agent](azure-monitor-agent-install.md) on Windows and Linux virtual machines.
+- [Install the Azure Monitor agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.2.4.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.4/applicationinsights-agent-3.2.4.jar) file.
+Download the [applicationinsights-agent-3.2.5.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.5/applicationinsights-agent-3.2.5.jar) file.
> [!WARNING] >
Download the [applicationinsights-agent-3.2.4.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to your application's JVM args.
+Add `-javaagent:path/to/applicationinsights-agent-3.2.5.jar` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to your application
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=... ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.4.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.5.jar` with the following content:
```json {
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-arguments.md
Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.5.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.2.4.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.2.5.jar -jar <myapp.jar>
``` ## Spring Boot via Docker entry point
-If you are using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.4.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you are using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.5.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.4.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.5.jar", "-jar", "<myapp.jar>"]
```
-If you are using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` somewhere before `-jar`, for example:
+If you are using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.5.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.4.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.5.jar -jar <myapp.jar>
``` ## Tomcat 8 (Linux)
ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.4.jar -jar <mya
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.4.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.5.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.4.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.4.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.5.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.5.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.4.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.5.jar
``` Quotes are not necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.4.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.5.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.5.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.5.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.2.5.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.4.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.5.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.5.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.2.4.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.2.5.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec--javaagent:path/to/applicationinsights-agent-3.2.4.jar
+-javaagent:path/to/applicationinsights-agent-3.2.5.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.5.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.2.4.jar>
+ -javaagent:path/to/applicationinsights-agent-3.2.5.jar>
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.2.4.jar` to the existing `jv
## WebSphere 8 Open Management Console
-go to **servers > WebSphere application servers > Application servers**, choose the appropriate application servers and click on:
+go to **servers > WebSphere application servers > Application servers**, choose the appropriate application servers and select:
``` Java and Process Management > Process definition > Java Virtual Machine ``` In "Generic JVM arguments" add the following: ```--javaagent:path/to/applicationinsights-agent-3.2.4.jar
+-javaagent:path/to/applicationinsights-agent-3.2.5.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.2.4.jar
+-javaagent:path/to/applicationinsights-agent-3.2.5.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
Connection string and role name are the most common settings needed to get start
} ```
-The connection string is required, and the role name is important any time you are sending data
+The connection string is required, and the role name is important anytime you are sending data
from different applications to the same Application Insights resource. You will find more details and additional configuration options below. ## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.4.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.5.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.4.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.5.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from 3.2.5-BETA, you can capture request and response headers on your server (request) telemetry:
+Starting from 3.2.5, you can capture request and response headers on your server (request) telemetry:
```json {
Again, the header names are case-insensitive, and the examples above will be cap
By default, http server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.2.5-BETA, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.2.5, you can change this behavior to capture them as success if you prefer:
```json {
Starting from version 3.2.0, the following preview instrumentations can be enabl
"akka": { "enabled": true },
+ "vertx": {
+ "enabled": true
+ }
} } } ``` > [!NOTE] > Akka instrumentation is available starting from version 3.2.2
+> Vertx HTTP Library instrumentation is available starting from version 3.2.5
## Metric interval
If you are using the heartbeat metric to trigger alerts, you can increase the fr
## Authentication (preview) > [!NOTE]
-> Authentication feature is available starting from version 3.2.0-BETA
+> Authentication feature is available starting from version 3.2.0
It allows you to configure agent to generate [token credentials](/java/api/overview/azure/identity-readme#credentials) that are required for Azure Active Directory Authentication. For more information, check out the [Authentication](./azure-ad-authentication.md) documentation.
you can configure Application Insights Java 3.x to use an HTTP proxy:
Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if those are set (and `http.nonProxyHosts` if needed).
-Starting from 3.2.5-BETA, authenticated proxies are supported. You can add `"user"` and `"password"` under `"proxy"` in
+Starting from 3.2.5, authenticated proxies are supported. You can add `"user"` and `"password"` under `"proxy"` in
the json above (or if you are using the system properties above, you can add `https.proxyUser` and `https.proxyPassword` system properties).
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.2.4.jar` is located.
+`applicationinsights-agent-3.2.5.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
Please configure specific options based on your needs.
} } }
-```
+```
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-troubleshoot.md
Typically the default Java keystore will already have all of the CA root certifi
2. Once you have the list of certificates, follow these [steps](#steps-to-download-ssl-certificate) to download the SSL certificate that was used to sign the Application Insights endpoint.
- Once you have the certificate downloaded, generate a SHA-1 hash on the certificate using the below command:
+ Once you have the certificate downloaded, generate an SHA-1 hash on the certificate using the below command:
> `keytool -printcert -v -file "your_downloaded_ssl_certificate.cer"` Copy the SHA-1 value and check if this value is present in "temp.txt" file you saved previously. If you are not able to find the SHA-1 value in the temp file, it indicates that the downloaded SSL cert is missing in default Java keystore.
If the Application Insights Java agent detects that you do not have any of the c
Cipher suites come into play before a client application and server exchange information over an SSL/TLS connection. The client application initiates an SSL handshake. Part of that process involves notifying the server which cipher suites it supports. The server receives that information and compares the cipher suites supported by the client application with the algorithms it supports. If it finds a match, the server notifies the client application and a secure connection is established. If it does not find a match, the server refuses the connection. #### How to determine client side cipher suites:
-In this case, the client is the JVM on which your instrumented application is running. Starting from 3.2.5-BETA, Application Insights Java will log a warning message if missing cipher suites could be causing connection failures to one of the service endpoints.
+In this case, the client is the JVM on which your instrumented application is running. Starting from 3.2.5, Application Insights Java will log a warning message if missing cipher suites could be causing connection failures to one of the service endpoints.
If using an earlier version of Application Insights Java, compile and run the following Java program to get the list of supported cipher suites in your JVM:
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
The Azure virtual machine has the following requirements:
- Operating system: Ubuntu 18.04 using Azure Marketplace [image](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-pro-bionic). Custom images are not supported. - Recommended minimum Azure virtual machine sizes: Standard_B2s (2 CPUs, 4 GiB memory) -- Deployed in any Azure region [supported](../agents/azure-monitor-agent-overview.md#supported-regions) by the Azure Monitor agent, and meeting all Azure Monitor agent [prerequisites](../agents/azure-monitor-agent-install.md#prerequisites).
+- Deployed in any Azure region [supported](../agents/azure-monitor-agent-overview.md#supported-regions) by the Azure Monitor agent, and meeting all Azure Monitor agent [prerequisites](../agents/azure-monitor-agent-manage.md#prerequisites).
> [!NOTE] > The Standard_B2s (2 CPUs, 4 GiB memory) virtual machine size will support up to 100 connection strings. You shouldn't allocate more than 100 connections to a single virtual machine.
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-troubleshoot.md
To see error messages from the telegraf service, run it manually by using the fo
### mdsd service logs
-Check [prerequisites](../agents/azure-monitor-agent-install.md#prerequisites) for the Azure Monitor agent.
+Check [prerequisites](../agents/azure-monitor-agent-manage.md#prerequisites) for the Azure Monitor agent.
Prior to Azure Monitoring Agent v1.12, mdsd service logs were located in: - `/var/log/mdsd.err`
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Title: Log Analytics workspace data export in Azure Monitor (preview)
-description: Log Analytics data export allows you to continuously export data of selected tables from your Log Analytics workspace to an Azure storage account or Azure Event Hubs as it's collected.
+description: Log Analytics workspace data export in Azure Monitor lets you continuously export data from selected tables in your workspace to an Azure Storage Account or Azure Event Hubs as it's collected.
Last updated 12/01/2021
# Log Analytics workspace data export in Azure Monitor (preview)
-Log Analytics workspace data export in Azure Monitor allows you to continuously export data from selected tables in your Log Analytics workspace to an Azure storage account or Azure Event Hubs as it's collected. This article provides details on this feature and steps to configure data export in your workspaces.
+Data export in a Log Analytics workspace lets you continuously export data from selected tables in your workspace to an Azure Storage Account or Azure Event Hubs as it's collected. This article provides details on this feature and steps to configure data export in your workspaces.
## Overview
-Data in Log Analytics is available for the retention period defined in your workspace and used in various experiences provided in Azure Monitor and other Azure services. There are more capabilities that can be met with data export:
-* Comply with tamper protected store requirement -- data can't be altered in Log Analytics once ingested, but can be purged. Export to storage account set with [immutability policies](../../storage/blobs/immutable-policy-configure-version-scope.md) to keep data tamper protected.
-* Integration with Azure services and other tools -- export to event hub in near-real-time to send data to your services and tools at it arrives to Azure Monitor.
-* Keep audit and security data for long time at low cost -- export to storage account in the same region as your workspace, or replicate data to other storage accounts in other regions using any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) including GRS and GZRS.
+Data in Log Analytics is available for the retention period defined in your workspace and is used in various experiences provided in Azure Monitor and Azure services. There are cases where you need to use other tools:
+* Tamper-protected store compliance - data can't be altered in Log Analytics once ingested, but can be purged. Export to a Storage Account set with [immutability policies](../../storage/blobs/immutable-policy-configure-version-scope.md) to keep data tamper protected.
+* Integration with Azure services and other tools - export to Event Hubs in near-real-time to send data to your services and tools as it arrives at Azure Monitor.
+* Keep audit and security data for a very long time - export to a Storage Account in the workspace's region, or replicate data to other regions using any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) including "GRS" and "GZRS".
-Once data export is configured in your Log Analytics workspace, any new data sent to the selected tables in the workspace is automatically exported in near-real-time to your storage account or to your event hub.
+When data export rules are configured in a Log Analytics workspace, newly ingested data for the tables selected in those rules is exported to your Storage Account or Event Hubs as it arrives.
[![Data export overview](media/logs-data-export/data-export-overview.png "Screenshot of data export flow diagram.")](media/logs-data-export/data-export-overview.png#lightbox)
-All data from included tables is exported without a filter. For example, when you configure a data export rule for *SecurityEvent* table, all data sent to the *SecurityEvent* table is exported starting from the configuration time.
+Data is exported without a filter. For example, when you configure a data export rule for the *SecurityEvent* table, all data sent to the *SecurityEvent* table is exported starting from the configuration time.
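+As a hedged sketch of configuring such a rule with PowerShell (the cmdlet is in the Az.OperationalInsights module; names and the destination resource ID are placeholders, and parameter names can vary by module version):
+
+```powershell
+# A hedged sketch: create a data export rule for the SecurityEvent table to a Storage Account.
+# Workspace, rule, and destination values are placeholders.
+New-AzOperationalInsightsDataExport -ResourceGroupName "<resource-group-name>" `
+  -WorkspaceName "<workspace-name>" `
+  -DataExportName "<export-rule-name>" `
+  -TableName "SecurityEvent" `
+  -ResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
+```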
## Other export options
-Log Analytics workspace data export continuously exports data from a Log Analytics workspace. Other options to export data for particular scenarios include the following:
+Log Analytics workspace data export continuously exports data that is sent to your Log Analytics workspace. There are other options to export data for particular scenarios:
-- Scheduled export from a log query using a Logic App. This is similar to the data export feature but allows you to send filtered or aggregated data to Azure storage. This method though is subject to [log query limits](../service-limits.md#log-analytics-workspaces), see [Archive data from Log Analytics workspace to Azure storage using Logic App](logs-export-logic-app.md).
+- Scheduled export from a log query using a Logic App. This is similar to the data export feature but allows you to send filtered or aggregated data to an Azure Storage Account. This method, though, is subject to [log query limits](../service-limits.md#log-analytics-workspaces); see [Archive data from Log Analytics workspace to Azure Storage Account using Logic App](logs-export-logic-app.md).
- One time export to local machine using PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport). ## Limitations -- All tables will be supported in export, but currently limited to those specified in the [supported tables](#supported-tables) section below.
+- All tables will be supported in export, but currently limited to those specified in the [supported tables](#supported-tables) section.
+- The legacy custom log won't be supported in export. The next generation of custom logs, available in preview in early 2022, can be exported.
- - Korea South
- - Jio India Central
- - Government regions
+- Storage Account must be unique across rules in workspace.
+- Tables names can be no longer than 60 characters when exporting to Storage Account and 47 characters to Event Hubs. Tables with longer names will not be exported.
+- Data export isn't supported in Government regions currently
## Data completeness
-Data export is optimized for moving large data volume to your destinations and in certain retry conditions, can include a fraction of duplicated records. The export operation to your destination could fail when ingress limits are reached, see details under [Create or update data export rule](#create-or-update-data-export-rule). Export continues to retry for up to 30 minutes and if destination is unavailable to accept data, data will be discarded until the destination becomes available.
+Data export is optimized for moving large data volumes to your destinations and, in certain retry conditions, can include a fraction of duplicated records. The export operation could fail when ingress limits are reached; see details under [Create or update data export rule](#create-or-update-data-export-rule). In that case, retries continue for up to 30 minutes, and if the destination is still unavailable, data is discarded until the destination becomes available.
## Cost Billing for the Log Analytics Data Export feature is not enabled yet. View more details on the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
Billing for the Log Analytics Data Export feature is not enabled yet. View more
The data export destination must be available before you create export rules in your workspace. Destinations don't have to be in the same subscription as your workspace. When using Azure Lighthouse, it is also possible to send data to destinations in another Azure Active Directory tenant.
-### Storage account
+### Storage Account
You need to have 'write' permissions to both the workspace and the destination to configure a data export rule.
-Don't use an existing storage account that has other, non-monitoring data stored in it to better control access to the data and prevent reaching storage ingress rate limit, failures, and latency.
+Don't use an existing Storage Account that holds other, non-monitoring data. Using a dedicated account gives you better control over access to the data and helps prevent reaching the storage ingress rate limit, failures, and latency.
-To send data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Blob storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this article, including enabling protected append blobs writes.
+To send data to an immutable Storage Account, set the immutability policy for the Storage Account as described in [Set and manage immutability policies for Blob storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this article, including enabling protected append blob writes.
-The storage account must be StorageV1 or above and in the same region as your workspace. If you need to replicate your data to other storage accounts in other regions, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) including GRS and GZRS.
+The Storage Account must be StorageV1 or above and in the same region as your workspace. If you need to replicate your data to other Storage Accounts in other regions, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region), including GRS and GZRS.
-Data is sent to storage accounts as it reaches Azure Monitor and exported to destinations located in workspace region. A container is created for each table in the storage account with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would sent to a container named *am-SecurityEvent*.
+Data is sent to Storage Accounts as it reaches Azure Monitor and is exported to destinations located in the workspace's region. A container is created in the Storage Account for each table, with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* is sent to a container named *am-SecurityEvent*.
-Blobs are stored in 5-minute folders in the following path structure: *WorkspaceResourceId=/subscriptions/subscription-id/resourcegroups/\<resource-group\>/providers/microsoft.operationalinsights/workspaces/\<workspace\>/y=\<four-digit numeric year\>/m=\<two-digit numeric month\>/d=\<two-digit numeric day\>/h=\<two-digit 24-hour clock hour\>/m=\<two-digit 60-minute clock minute\>/PT05M.json*. Append blobs is limited to 50-K writes and could be reached, the naming pattern for blobs in such case would be PT05M_#.json*, where # is incremental blob count.
+Blobs are stored in 5-minute folders in the following path structure: *WorkspaceResourceId=/subscriptions/subscription-id/resourcegroups/\<resource-group\>/providers/microsoft.operationalinsights/workspaces/\<workspace\>/y=\<four-digit numeric year\>/m=\<two-digit numeric month\>/d=\<two-digit numeric day\>/h=\<two-digit 24-hour clock hour\>/m=\<two-digit 60-minute clock minute\>/PT05M.json*. An append blob is limited to 50,000 writes; when that limit is reached, more blobs are added in the folder as *PT05M_#.json*, where # is an incremental blob count.
-The storage account data format is in [JSON lines](../essentials/resource-logs-blob-format.md), where each record is delimited by a newline, with no outer records array and no commas between JSON records.
+The format of blobs in the Storage Account is [JSON lines](../essentials/resource-logs-blob-format.md), where each record is delimited by a newline, with no outer records array and no commas between JSON records.
-[![Storage sample data](media/logs-data-export/storage-data.png "Screenshot of data format in blob storage.")](media/logs-data-export/storage-data-expand.png#lightbox)
+[![Storage sample data](media/logs-data-export/storage-data.png "Screenshot of data format in blob.")](media/logs-data-export/storage-data-expand.png#lightbox)
-### Event hub
+### Event Hubs
-You need to have 'write' permissions to both workspace and destination to configure data export rule. The shared access policy for the event hub namespace defines the permissions that the streaming mechanism has. Streaming to event hub requires Manage, Send, and Listen permissions. To update the export rule, you must have the ListKey permission on that Event Hubs authorization rule.
+You need to have 'write' permissions to both the workspace and the destination to configure a data export rule. The shared access policy for the Event Hubs namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the export rule, you must have the ListKey permission on that Event Hubs authorization rule.
-Don't use an existing event hub that has other, non-monitoring data stored in it to better control access to the data and prevent reaching event hub namespace ingress rate limit, failures, and latency.
+Don't use an existing Event Hubs namespace that holds other, non-monitoring data. Using a dedicated namespace helps prevent reaching the Event Hubs namespace ingress rate limit, failures, and latency.
-Data is sent to your event hub as it reaches Azure Monitor and exported to destinations located in workspace region. You can create multiple export rules to the same event hub namespace by providing different `event hub name` in rule.When `event hub name` isn't provided, a default event hub is created for each table that you export with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would sent to an event hub named *am-SecurityEvent*. The [number of supported event hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
+Data is sent to your Event Hubs as it reaches Azure Monitor and is exported to destinations located in the workspace's region. You can create multiple export rules to the same Event Hubs namespace by providing a different `event hub name` in each rule. When `event hub name` isn't provided, a default Event Hub is created for each table that you export, named *am-* followed by the name of the table. For example, the table *SecurityEvent* is sent to an Event Hub named *am-SecurityEvent*. The [number of supported Event Hubs in the 'Basic' and 'Standard' namespace tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces, or provide an Event Hub name in the rule to export all tables to it.
> [!NOTE]
-> - 'Basic' event hub tier is limited--it supports [lower event size](../../event-hubs/event-hubs-quotas.md#basic-vs-standard-vs-premium-vs-dedicated-tiers) and no [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) option to automatically scale up and increase the number of throughput units. Since data volume to your workspace increases over time and consequence event hub scaling is required, use 'Standard', 'Premium' or 'Dedicated' event hub tiers with **Auto-inflate** feature enabled. See [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md).
-> - Data export can't reach event hub resources when virtual networks are enabled. You have to enable the **Allow trusted Microsoft services** to bypass this firewall setting in event hub, to grant access to your Event Hubs resources.
+> - The 'Basic' Event Hubs namespace tier is limited: it supports a [lower event size](../../event-hubs/event-hubs-quotas.md#basic-vs-standard-vs-premium-vs-dedicated-tiers) and has no [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) option to automatically scale up and increase the number of throughput units. Because data volume to your workspace increases over time and Event Hubs scaling is required as a consequence, use the 'Standard', 'Premium', or 'Dedicated' Event Hubs tiers with the **Auto-inflate** feature enabled. See [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md).
+> - Data export can't reach Event Hubs resources when virtual networks are enabled. You have to enable the **Allow trusted Microsoft services** setting to bypass this firewall restriction in Event Hubs and grant access to your Event Hubs resources.
## Enable data export The following steps must be performed to enable Log Analytics data export. See the following sections for more details on each.
Register-AzResourceProvider -ProviderNamespace Microsoft.insights
``` ### Allow trusted Microsoft services
-If you have configured your storage account to allow access from selected networks, you need to add an exception to allow Azure Monitor to write to the account. From **Firewalls and virtual networks** for your storage account, select **Allow trusted Microsoft services to access this storage account**.
+If you have configured your Storage Account to allow access from selected networks, you need to add an exception to allow Azure Monitor to write to the account. From **Firewalls and virtual networks** for your Storage Account, select **Allow trusted Microsoft services to access this Storage Account**.
-[![Storage account firewalls and networks](media/logs-data-export/storage-account-network.png "Screenshot of allow trusted Microsoft services.")](media/logs-data-export/storage-account-network.png#lightbox)
+[![Storage Account firewalls and networks](media/logs-data-export/storage-account-network.png "Screenshot of allow trusted Microsoft services.")](media/logs-data-export/storage-account-network.png#lightbox)
-### Destinations monitoring
+### Monitor destinations
> [!IMPORTANT]
-> Export destinations have limits and should be monitored to minimize throttling, failures, and latency. See [storage accounts scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [event hub namespace quota](../../event-hubs/event-hubs-quotas.md).
+> Export destinations have limits and should be monitored to minimize throttling, failures, and latency. See [Storage Accounts scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [Event Hubs namespace quota](../../event-hubs/event-hubs-quotas.md).
-**Monitoring storage account**
+**Monitoring Storage Account**
-1. Use separate storage account for export
-2. Configure alert on the metric below:
+1. Use a separate Storage Account for export
+2. Configure an alert on the following metric:
| Scope | Metric Namespace | Metric | Aggregation | Threshold | |:|:|:|:|:| | storage-name | Account | Ingress | Sum | 80% of max ingress per alert evaluation period. For example: the limit is 60 Gbps for general-purpose v2 in West US, so the threshold is 14,400 Gb per 5-minute evaluation period | 3. Alert remediation actions
- - Use separate storage account for export that isn't shared with non-monitoring data.
+ - Use a separate Storage Account for export that isn't shared with non-monitoring data.
- Azure Storage standard accounts support higher ingress limit by request. To request an increase, contact [Azure Support](https://azure.microsoft.com/support/faq/).
- - Split tables between more storage accounts.
+ - Split tables between more Storage Accounts.
-**Monitoring event hub**
+**Monitoring Event Hubs**
-1. Configure alerts on the [metrics](../../event-hubs/monitor-event-hubs-reference.md) below:
+1. Configure alerts on the [metrics](../../event-hubs/monitor-event-hubs-reference.md):
| Scope | Metric Namespace | Metric | Aggregation | Threshold | |:|:|:|:|:|
- | namespaces-name | Event Hub standard metrics | Incoming bytes | Sum | 80% of max ingress per alert evaluation period. For example, limit is 1 MB/s per unit (TU or PU) and five units used. Threshold is 1200 MB per 5-minutes evaluation period |
- | namespaces-name | Event Hub standard metrics | Incoming requests | Count | 80% of max events per alert evaluation period. For example, limit is 1000/s per unit (TU or PU) and five units used. Threshold is 1200000 per 5-minutes evaluation period |
- | namespaces-name | Event Hub standard metrics | Quota Exceeded Errors | Count | Between 1% of request. For example, requests per 5 minutes is 600000. Threshold is 6000 per 5-minutes evaluation period |
+ | namespaces-name | Event Hub standard metrics | Incoming bytes | Sum | 80% of max ingress per alert evaluation period. For example, the limit is 1 MB/s per unit (TU or PU) and five units are used, so the threshold is 1200 MB per 5-minute evaluation period |
+ | namespaces-name | Event Hub standard metrics | Incoming requests | Count | 80% of max events per alert evaluation period. For example, the limit is 1000/s per unit (TU or PU) and five units are used, so the threshold is 1200000 per 5-minute evaluation period |
+ | namespaces-name | Event Hub standard metrics | Quota Exceeded Errors | Count | 1% of requests. For example, if requests per 5-minute period are 600000, the threshold is 6000 per 5-minute evaluation period |
2. Alert remediation actions
- - Use separate event hub namespace for export that isn't shared with non-monitoring data.
+ - Use a separate Event Hubs namespace for export that isn't shared with non-monitoring data.
- Configure [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) feature to automatically scale up and increase the number of throughput units to meet usage needs - Verify increase of throughput units to accommodate data volume - Split tables between more namespaces - Use 'Premium' or 'Dedicated' tiers for higher throughput ### Create or update data export rule
-Data export rule defines the destination and tables for which data is exported. You can create 10 rules in 'enable' state in your workspace, more rules are allowed in 'disable' state. Storage account must be unique across rules in workspace. Multiple rules can use the same event hub namespace when sending to separate event hubs.
+A data export rule defines the destination and the tables for which data is exported. You can create 10 rules in the 'enable' state in your workspace; more rules are allowed in the 'disable' state. The Storage Account must be unique across rules in the workspace. Multiple rules can use the same Event Hubs namespace when sending to separate Event Hubs.
> [!NOTE] > - You can include tables that aren't yet supported in export, and no data will be exported for these until the tables are supported. > - The legacy custom log won't be supported in export. The next generation of custom log available in preview early 2022 can be exported.
-> - Export to storage account - a separate container is created in storage account for each table.
-> - Export to event hub - if event hub name isn't provided, a separate event hub is created for each table. The [number of supported event hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
+> - Export to Storage Account - a separate container is created in the Storage Account for each table.
+> - Export to Event Hubs - if an Event Hub name isn't provided, a separate Event Hub is created for each table. The [number of supported Event Hubs in the 'Basic' and 'Standard' namespace tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces, or provide an Event Hub name in the rule to export all tables to it.
# [Azure portal](#tab/portal)
Follow the steps, then click **Create**.
# [PowerShell](#tab/powershell)
-Use the following command to create a data export rule to a storage account using PowerShell. A separate container is created for each table.
+Use the following command to create a data export rule to a Storage Account using PowerShell. A separate container is created for each table.
```powershell $storageAccountResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name' New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $storageAccountResourceId ```
-Use the following command to create a data export rule to a specific event hub using PowerShell. All tables are exported to the provided event hub name and can be filtered by "Type" field to separate tables.
+Use the following command to create a data export rule to a specific Event Hub using PowerShell. All tables are exported to the provided Event Hub name and can be filtered by "Type" field to separate tables.
```powershell $eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name' New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $eventHubResourceId -EventHubName EventhubName ```
-Use the following command to create a data export rule to an event hub using PowerShell. When specific event hub name isn't provided, a separate container is created for each table up to the [number of supported event hubs for your event hub tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you have more tables to export, provide event hub name to export any number of tables, or set another rule to export the remaining tables to another event hub namespace.
+Use the following command to create a data export rule to an Event Hub using PowerShell. When a specific Event Hub name isn't provided, a separate Event Hub is created for each table, up to the [number of Event Hubs supported in your Event Hubs tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). To export more tables, provide an Event Hub name in the rule, or set another rule and export the remaining tables to another Event Hubs namespace.
```powershell $eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name'
New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -Worksp
# [Azure CLI](#tab/azure-cli)
-Use the following command to create a data export rule to a storage account using CLI. A separate container is created for each table.
+Use the following command to create a data export rule to a Storage Account using CLI. A separate container is created for each table.
```azurecli $storageAccountResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name' az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $storageAccountResourceId ```
-Use the following command to create a data export rule to a specific event hub using CLI. All tables are exported to the provided event hub name and can be filtered by "Type" field to separate tables.
+Use the following command to create a data export rule to a specific Event Hub using CLI. All tables are exported to the provided Event Hub name and can be filtered by "Type" field to separate tables.
```azurecli $eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name' az monitor log-analytics workspace data-export create --resource-group resourceGroupName --workspace-name workspaceName --name ruleName --tables SecurityEvent Heartbeat --destination $eventHubResourceId ```
-Use the following command to create a data export rule to an event hub using CLI. When specific event hub name isn't provided, a separate container is created for each table up to the [number of supported event hubs for your event hub tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you have more tables to export, provide event hub name to export any number of tables, or set another rule to export the remaining tables to another event hub namespace.
+Use the following command to create a data export rule to an Event Hub using CLI. When a specific Event Hub name isn't provided, a separate Event Hub is created for each table, up to the [number of supported Event Hubs for your Event Hubs tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you have more tables to export, provide an Event Hub name to export any number of tables, or set another rule to export the remaining tables to another Event Hubs namespace.
```azurecli $eventHubsNamespacesResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name'
az monitor log-analytics workspace data-export create --resource-group resourceG
# [REST](#tab/rest)
-Use the following request to create a data export rule a storage account using the REST API. A separate container is created for each table. The request should use bearer token authorization and content type application/json.
+Use the following request to create a data export rule to a Storage Account using the REST API. A separate container is created for each table. The request should use bearer token authorization and content type application/json.
```rest PUT https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.operationalInsights/workspaces/<workspace-name>/dataexports/<data-export-name>?api-version=2020-08-01
The body of the request specifies the tables destination. Following is a sample
} ```
-Following is a sample body for the REST request for an event hub.
+Following is a sample body for the REST request for an Event Hub.
```json {
Following is a sample body for the REST request for an event hub.
} ```
-Following is a sample body for the REST request for an event hub where event hub name is provided. In this case, all exported data is sent to this event hub.
+Following is a sample body for the REST request for an Event Hub where the Event Hub name is provided. In this case, all exported data is sent to that Event Hub.
```json {
Following is a sample body for the REST request for an event hub where event hub
# [Template](#tab/json)
-Use the following command to create a data export rule to a storage account using Resource Manager template.
+Use the following Resource Manager template to create a data export rule to a Storage Account.
``` {
Use the following command to create a data export rule to a storage account usin
} ```
-Use the following command to create a data export rule to an event hub using Resource Manager template. A separate event hub is created for each table.
+Use the following Resource Manager template to create a data export rule to an Event Hubs namespace. A separate Event Hub is created for each table.
``` {
Use the following command to create a data export rule to an event hub using Res
} ```
-Use the following command to create a data export rule to a specific event hub using Resource Manager template. All tables are exported to the provided event hub name.
+Use the following Resource Manager template to create a data export rule to a specific Event Hub. All tables are exported to it.
``` {
Content-type: application/json
# [Template](#tab/json)
-Export rules can be disabled to let you stop the export when you don't need to retain data for a certain period such as when testing is being performed. Set ```"enable": false``` in template to disable a data export.
+Export rules can be disabled to stop the export, for example when you're testing and don't need data sent to the destination. Set ```"enable": false``` in the template to disable a data export rule.
N/A
If the data export rule includes an unsupported table, the configuration will succeed, but no data will be exported for that table. If the table is later supported, then its data will be exported at that time. ## Supported tables
-Supported tables are currently limited to those specified below. All data from the table will be exported unless limitations are specified. This list is updated as more tables are added.
+All data from each table is exported unless limitations are specified. This list is updated as more tables are added.
| Table | Limitations | |:|:|
Supported tables are currently limited to those specified below. All data from t
| AADServicePrincipalSignInLogs | | | AADUserRiskEvents | | | ABSBotRequests | |
+| ACRConnectedClientList | |
| ACSAuthIncomingOperations | | | ACSBillingUsage | |
-| ACRConnectedClientList | |
-| ACRConnectedClientList | |
| ACSCallDiagnostics | | | ACSCallSummary | | | ACSChatIncomingOperations | |
Supported tables are currently limited to those specified below. All data from t
| ADTQueryOperation | | | ADXCommand | | | ADXQuery | |
+| AEWAuditLogs | |
+| ATCExpressRouteCircuitIpfix | |
+| AWSCloudTrail | |
+| AWSGuardDuty | |
+| AWSVPCFlow | |
| AegDeliveryFailureLogs | | | AegPublishFailureLogs | |
-| AEWAuditLogs | |
-| AEWComputePipelinesLogs | |
| AgriFoodApplicationAuditLogs | |
-| AgriFoodApplicationAuditLogs | |
-| AgriFoodFarmManagementLogs | |
| AgriFoodFarmManagementLogs | | | AgriFoodFarmOperationLogs | |
-| AgriFoodInsightLogs | |
| AgriFoodJobProcessedLogs | |
-| AgriFoodModelInferenceLogs | |
| AgriFoodProviderAuthLogs | |
-| AgriFoodSatelliteLogs | |
-| AgriFoodWeatherLogs | |
-| Alert | |
+| Alert | Partial support – Data ingestion for Zabbix alerts isn't supported. |
| AlertEvidence | | | AlertInfo | |
-| AmlOnlineEndpointConsoleLog | |
| ApiManagementGatewayLogs | | | AppCenterError | | | AppPlatformSystemLogs | |
Supported tables are currently limited to those specified below. All data from t
| AppServiceFileAuditLogs | | | AppServiceHTTPLogs | | | AppServicePlatformLogs | |
-| ATCExpressRouteCircuitIpfix | |
| AuditLogs | | | AutoscaleEvaluationsLog | | | AutoscaleScaleActionsLog | |
-| AWSCloudTrail | |
-| AWSGuardDuty | |
-| AWSVPCFlow | |
-| AZFWApplicationRule | |
-| AZFWApplicationRuleAggregation | |
-| AZFWDnsQuery | |
-| AZFWIdpsSignature | |
-| AZFWNatRule | |
-| AZFWNatRuleAggregation | |
-| AZFWNetworkRule | |
-| AZFWNetworkRuleAggregation | |
| AzureAssessmentRecommendation | | | AzureDevOpsAuditing | | | BehaviorAnalytics | |
-| BlockchainApplicationLog | |
-| BlockchainProxyLog | |
-| CassandraAudit | |
-| CassandraLogs | |
| CDBCassandraRequests | | | CDBControlPlaneRequests | | | CDBDataPlaneRequests | |
Supported tables are currently limited to those specified below. All data from t
| CDBQueryRuntimeStatistics | | | CIEventsAudit | | | CIEventsOperational | |
+| CassandraLogs | |
| CloudAppEvents | | | CommonSecurityLog | | | ComputerGroup | |
Supported tables are currently limited to those specified below. All data from t
| ContainerNodeInventory | | | ContainerServiceLog | | | CoreAzureBackup | |
+| DSMAzureBlobStorageLogs | |
| DatabricksAccounts | | | DatabricksClusters | | | DatabricksDBFS | | | DatabricksInstancePools | | | DatabricksJobs | | | DatabricksNotebook | |
-| DatabricksSecrets | |
| DatabricksSQLPermissions | | | DatabricksSSH | |
+| DatabricksSecrets | |
| DatabricksWorkspace | |
-| DeviceNetworkInfo | |
| DnsEvents | | | DnsInventory | |
-| DSMAzureBlobStorageLogs | |
-| DummyHydrationFact | |
| Dynamics365Activity | | | EmailAttachmentInfo | | | EmailEvents | | | EmailPostDeliveryEvents | | | EmailUrlInfo | |
-| Event | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via Diagnostics Extension agent is collected though storage while this path isn't supported in export.2 |
+| Event | Partial support – data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage, and that path isn't supported in export. |
| ExchangeAssessmentRecommendation | | | FailedIngestion | | | FunctionAppLogs | | | HDInsightAmbariClusterAlerts | | | HDInsightAmbariSystemMetrics | |
-| HDInsightHadoopAndYarnLogs | |
-| HDInsightHadoopAndYarnMetrics | |
| HDInsightHBaseLogs | | | HDInsightHBaseMetrics | |
+| HDInsightHadoopAndYarnLogs | |
+| HDInsightHadoopAndYarnMetrics | |
| HDInsightHiveAndLLAPLogs | | | HDInsightHiveAndLLAPMetrics | | | HDInsightHiveQueryAppStats | |
Supported tables are currently limited to those specified below. All data from t
| KubePodInventory | | | KubeServices | | | LAQueryLogs | |
-| McasShadowItReporting | |
| MCCEventLogs | |
-| MCVPAuditLogs | |
+| MCVPOperationLogs | |
+| McasShadowItReporting | |
| MicrosoftAzureBastionAuditLogs | | | MicrosoftDataShareReceivedSnapshotLog | | | MicrosoftDataShareSentSnapshotLog | |
-| MicrosoftDataShareShareLog | |
| MicrosoftHealthcareApisAuditLogs | | | NWConnectionMonitorPathResult | | | NWConnectionMonitorTestResult | | | OfficeActivity | Partial support in government clouds – some of the data is ingested via webhooks from O365 into LA. This portion is missing in export currently. | | Operation | Partial support – some of the data is ingested through internal services that aren't supported in export. This portion is missing in export currently. | | Perf | Partial support – only Windows perf data is currently supported. The Linux perf data is missing in export currently. |
-| PowerBIActivity | |
| PowerBIDatasetsWorkspace | |
-| ProjectActivity | |
| PurviewDataSensitivityLogs | | | PurviewScanStatusLogs | | | SCCMAssessmentRecommendation | | | SCOMAssessmentRecommendation | |
+| SQLAssessmentRecommendation | |
+| SQLSecurityAuditEvents | |
| SecurityAlert | | | SecurityBaseline | | | SecurityBaselineSummary | | | SecurityDetection | |
-| SecurityEvent | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via Diagnostics Extension agent is collected though storage while this path isn't supported in export. |
+| SecurityEvent | Partial support – data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage, and that path isn't supported in export. |
| SecurityIncident | | | SecurityIoTRawEvent | | | SecurityNestedRecommendation | | | SecurityRecommendation | | | SentinelHealth | | | SfBAssessmentRecommendation | |
-| SfBOnlineAssessmentRecommendation | |
| SharePointOnlineAssessmentRecommendation | | | SignalRServiceDiagnosticLogs | | | SigninLogs | |
-| SPAssessmentRecommendation | |
-| SQLAssessmentRecommendation | |
-| SQLSecurityAuditEvents | |
| SucceededIngestion | | | SynapseBigDataPoolApplicationsEnded | | | SynapseBuiltinSqlPoolRequestsEnded | |
Supported tables are currently limited to those specified below. All data from t
| SynapseIntegrationPipelineRuns | | | SynapseIntegrationTriggerRuns | | | SynapseRbacOperations | |
+| SynapseScopePoolScopeJobsEnded | |
+| SynapseScopePoolScopeJobsStateChange | |
| SynapseSqlPoolDmsWorkers | | | SynapseSqlPoolExecRequests | | | SynapseSqlPoolRequestSteps | | | SynapseSqlPoolSqlRequests | | | SynapseSqlPoolWaits | |
-| Syslog | Partial support – data arriving from Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via Diagnostics Extension agent is collected though storage while this path isn't supported in export. |
+| Syslog | Partial support – data arriving from the Log Analytics agent (MMA) or Azure Monitor Agent (AMA) is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage, and that path isn't supported in export. |
| ThreatIntelligenceIndicator | |
-| UCClient | |
| UCClientUpdateStatus | |
-| UCDeviceAlert | |
-| UCServiceUpdateStatus | |
| Update | Partial support – some of the data is ingested through internal services that aren't supported in export. This portion is missing in export currently. | | UpdateRunProgress | | | UpdateSummary | | | Usage | |
-| UserAccessAnalytics | |
| UserPeerAnalytics | |
-| Watchlist | |
-| WindowsEvent | |
-| WindowsFirewall | |
-| WireData | Partial support – some of the data is ingested through internal services that aren't supported in export. This portion is missing in export currently. |
-| WorkloadDiagnosticLogs | |
| WVDAgentHealthStatus | | | WVDCheckpoints | | | WVDConnections | | | WVDErrors | | | WVDFeeds | | | WVDManagement | |
+| Watchlist | |
+| WindowsEvent | |
+| WindowsFirewall | |
+| WireData | Partial support – some of the data is ingested through internal services that aren't supported in export. This portion is missing in export currently. |
+| WorkloadDiagnosticLogs | |
## Next steps
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 11/27/2021 Last updated : 01/26/2022
The default pricing for Log Analytics is a **Pay-As-You-Go** model that's based
- Type of data collected from each monitored resource <a name="commitment-tier"></a>
+### Commitment Tiers
In addition to the Pay-As-You-Go model, Log Analytics has **Commitment Tiers**, which can save you as much as 30 percent compared to the Pay-As-You-Go price. With the commitment tier pricing, you can commit to buy data ingestion starting at 100 GB/day at a lower price than Pay-As-You-Go pricing. Any usage above the commitment level (overage) is billed at that same price per GB as provided by the current commitment tier. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. - During the commitment period, you can change to a higher commitment tier (which restarts the 31-day commitment period), but you can't move back to Pay-As-You-Go or to a lower commitment tier until after you finish the commitment period.
Billing for the commitment tiers is done on a daily basis. [Learn more](https://
> [!NOTE] > Starting June 2, 2021, **Capacity Reservations** are now called **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Three new commitment tiers were also added: 1000, 2000, and 5000 GB/day.
+<a name="data-size"></a>
+<a name="free-data-types"></a>
In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. Some solutions have more solution-specific policies about free data ingestion, for instance [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (10^9 bytes). Also, some solutions, such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
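As a quick check, a sketch like the following uses the **_IsBillable** and **_BilledSize** properties to compare billable and free volume for a single table over the last day; *AzureActivity* is used here only because it's one of the free data types mentioned above.

```kusto
// Sketch: billable vs. free volume (MB) for one table over the last day.
// _BilledSize is in bytes; _IsBillable is false for free data types.
AzureActivity
| where TimeGenerated > ago(1d)
| summarize VolumeMB = sum(_BilledSize) / 1.E6 by _IsBillable
```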
If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor prici
## Viewing Log Analytics usage on your Azure bill
-The easiest way to view your billed usage for a partciular Log Analytics workspace is to go to the **Overview** page of the workspace and click **View Cost** in the upper right corner of the Essentials section at the top of the page. This will launch the Cost Analysis from Azure Cost Management + Billing already scoped to this workspace. You might need additional access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md))
+The easiest way to view your billed usage for a particular Log Analytics workspace is to go to the **Overview** page of the workspace and click **View Cost** in the upper right corner of the Essentials section at the top of the page. This will launch the Cost Analysis from Azure Cost Management + Billing, already scoped to this workspace. You might need additional access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md)).
Alternatively, you can start in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=%2fazure%2fbilling%2fTOC.json) hub. Here you can use the "Cost analysis" functionality to view your Azure resource expenses. To track your Log Analytics expenses, you can add a filter by "Resource type" (to microsoft.operationalinsights/workspace for Log Analytics and microsoft.operationalinsights/cluster for Log Analytics Clusters). For **Group by**, select **Meter category** or **Meter**. Other services, like Microsoft Defender for Cloud and Microsoft Sentinel, also bill their usage against Log Analytics workspace resources. To see the mapping to the service name, you can select the Table view instead of a chart.
In the downloaded spreadsheet, you can see usage per Azure resource (for example
## Understand your usage and optimizing your pricing tier <a name="understand-your-usage-and-estimate-costs"></a>
-To learn about your usage trends and choose the most cost effective log Analytics pricing tier, use **Log Analytics Usage and Estimated Costs**. This shows how much data is collected by each solution, how much data is being retained, and an estimate of your costs for each pricing tier based on recent data ingestion patterns.
+To learn about your usage trends and choose the most cost-effective Log Analytics pricing tier, use **Log Analytics Usage and Estimated Costs**. This shows how much data is collected by each solution, how much data is being retained, and an estimate of your costs for each pricing tier based on recent data ingestion patterns.
:::image type="content" source="media/manage-cost-storage/usage-estimated-cost-dashboard-01.png" alt-text="Usage and estimated costs":::
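As a complement to the **Usage and Estimated Costs** page, a query sketch like the following (assuming the standard *Usage* table schema) charts daily billable ingestion, which you can compare against the Commitment Tier levels.

```kusto
// Sketch: daily billable ingestion (GB) over the last 31 days.
Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by bin(TimeGenerated, 1d)
| render columnchart
```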
None of the legacy pricing tiers have regional-based pricing.
> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier. ## Log Analytics and Microsoft Defender for Cloud
+<a name="ASC"></a>
[Microsoft Defender for servers (Defender for Cloud)](../../security-center/index.yml) billing is closely tied to Log Analytics billing. Microsoft Defender for Cloud [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/azure-defender/) and provides 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security) (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)). The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
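To see how your ingestion compares with the per-server allocation, a hedged sketch like the following sums the billable volume of a few of the security data types listed above, per computer and per day. The table list is abbreviated, and `isfuzzy=true` lets the union succeed even if some of these tables don't exist in your workspace.

```kusto
// Sketch: daily billable volume (GB) of selected security data types, per computer.
union isfuzzy=true SecurityEvent, SecurityBaseline, SecurityDetection, WindowsFirewall, ProtectionStatus
| where TimeGenerated > startofday(ago(7d))
| where _IsBillable == true
| summarize SecurityDataGB = sum(_BilledSize) / 1.E9 by Computer, bin(TimeGenerated, 1d)
| sort by SecurityDataGB desc
```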
To view the effect of the daily cap, it's important to account for the security
let DailyCapResetHour=14; Usage | where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
-| extend TimeGenerated=datetime_add("hour",-1*DailyCapResetHour,TimeGenerated)
-| where TimeGenerated > startofday(ago(31d))
+| where TimeGenerated > ago(32d)
+| extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime)
+| where StartTime > startofday(ago(31d))
| where IsBillable
-| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(TimeGenerated, 1d) // Quantity in units of MB
+| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(StartTime , 1d) // Quantity in units of MB
| render areachart ``` Add `Update` and `UpdateSummary` data types to the `where Datatype` line when the Update Management solution is not running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance).) - ### Alert when daily cap is reached Azure portal has a visual cue when your data limit threshold is met, but this behavior doesn't necessarily align to how you manage operational issues that require immediate attention. To receive an alert notification, you can create a new alert rule in Azure Monitor. To learn more, see [how to create, view, and manage alerts](../alerts/alerts-metric.md).
After an alert is defined and the limit is reached, an alert is triggered and pe
- Azure Automation runbooks - [Integrated with an external ITSM solution](../alerts/itsmc-definition.md#create-itsm-work-items-from-azure-alerts).
-## Troubleshooting why usage is higher than expected
+## Investigate your Log Analytics usage
+<a name="troubleshooting-why-usage-is-higher-than-expected"></a>
Higher usage is caused by one, or both, of the following: - More nodes than expected sending data to Log Analytics workspace. For information, see the [Understanding nodes sending data](#understanding-nodes-sending-data) section of this article.
Learn more about the [capabilities of the Usage tab](log-analytics-workspace-ins
While this workbook can answer many of the questions without even needing to run a query, to answer more specific questions or do deeper analyses, the queries in the next two sections will help to get you started.
-## Understanding nodes sending data
-
-To understand the number of nodes that are reporting heartbeats from the agent each day in the last month, use this query:
-
-```kusto
-Heartbeat
-| where TimeGenerated > startofday(ago(31d))
-| summarize nodes = dcount(Computer) by bin(TimeGenerated, 1d)
-| render timechart
-```
-The get a count of nodes sending data in the last 24 hours, use this query:
-
-```kusto
-find where TimeGenerated > ago(24h) project Computer
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| summarize nodes = dcount(computerName)
-```
-
-To get a list of nodes sending any data (and the amount of data sent by each), use this query:
-
-```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, Computer
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| summarize TotalVolumeBytes=sum(_BilledSize) by computerName
-```
-
-### Nodes billed by the legacy Per Node pricing tier
-
-The [legacy Per Node pricing tier](#legacy-pricing-tiers) bills for nodes with hourly granularity and also doesn't count nodes that are only sending a set of security data types. Its daily count of nodes would be close to the following query:
-
-```kusto
-find where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now()) project Computer, _IsBillable, Type, TimeGenerated
-| where Type !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
-| extend computerName = tolower(tostring(split(Computer, '.')[0]))
-| where computerName != ""
-| where _IsBillable == true
-| summarize billableNodesPerHour=dcount(computerName) by bin(TimeGenerated, 1h)
-| summarize billableNodesPerDay = sum(billableNodesPerHour)/24., billableNodeMonthsPerDay = sum(billableNodesPerHour)/24./31. by day=bin(TimeGenerated, 1d)
-| sort by day asc
-```
-
-The number of units on your bill is in units of node months, which is represented by `billableNodeMonthsPerDay` in the query.
-If the workspace has the Update Management solution installed, add the **Update** and **UpdateSummary** data types to the list in the where clause in the above query. Finally, there's some additional complexity in the actual billing algorithm when solution targeting is used that's not represented in the above query.
--
-> [!TIP]
-> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results **per computer**, then query on the **Usage** data type (see below).
- ## Understanding ingested data volume On the **Usage and Estimated Costs** page, the *Data ingestion per solution* chart shows the total volume of data sent and how much is being sent by each solution. You can determine trends like whether the overall data usage (or usage by a particular solution) is growing, remaining steady, or decreasing.
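To see the same breakdown in a query rather than the chart, a sketch like the following (assuming the standard *Usage* table schema) sums billable ingestion by solution and data type over the last 31 days.

```kusto
// Sketch: billable ingestion (GB) per solution and data type, last 31 days.
Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by Solution, DataType
| sort by BillableDataGB desc
```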
If needed, you can also parse the **_ResourceId** more fully:
> Some of the fields of the **Usage** data type, while still in the schema, have been deprecated and their values are no longer populated. > These are **Computer**, as well as fields related to ingestion (**TotalBatches**, **BatchesWithinSla**, **BatchesOutsideSla**, **BatchesCapped** and **AverageProcessingTimeMs**).
-## Late-arriving data
+## Tips for reducing data volume
-Situations can arise where data is ingested with old timestamps. For example, if an agent can't communicate to Log Analytics because of a connectivity issue or when a host has an incorrect time date/time. This can manifest itself by an apparent discrepancy between the ingested data reported by the **Usage** data type and a query summing **_BilledSize** over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
+This table lists some suggestions for reducing the volume of logs collected.
-To diagnose late-arriving data issues, use the **_TimeReceived** column ([learn more](./log-standard-columns.md#_timereceived)) in addition to the **TimeGenerated** column. **_TimeReceived** is the time when the record was received by the Azure Monitor ingestion point in the Azure cloud. For example, when using the **Usage** records, you have observed high ingested data volumes of **W3CIISLog** data on May 2, 2021, here is a query that identifies the timestamps on this ingested data:
+| Source of high data volume | How to reduce data volume |
+| -- | - |
+| Data Collection Rules | The [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) uses Data Collection Rules to manage the collection of data. You can [limit the collection of data](../agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) using custom XPath queries. |
+| Container Insights | [Configure Container Insights](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to collect only the data you require. |
+| Microsoft Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) that you recently enabled as sources of additional data volume. [Learn more](../../sentinel/azure-sentinel-billing.md) about Sentinel costs and billing. |
+| Security events | Select [common or minimal security events](../../security-center/security-center-enable-data-collection.md#data-collection-tier). <br> Change the security audit policy to collect only needed events. In particular, review the need to collect events for: <br> - [audit filtering platform](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772749(v=ws.10)). <br> - [audit registry](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941614(v%3dws.10)). <br> - [audit file system](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772661(v%3dws.10)). <br> - [audit kernel object](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941615(v%3dws.10)). <br> - [audit handle manipulation](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772626(v%3dws.10)). <br> - audit removable storage. |
+| Performance counters | Change the [performance counter configuration](../agents/data-sources-performance-counters.md) to: <br> - Reduce the frequency of collection. <br> - Reduce the number of performance counters. |
+| Event logs | Change the [event log configuration](../agents/data-sources-windows-events.md) to: <br> - Reduce the number of event logs collected. <br> - Collect only required event levels. For example, do not collect *Information* level events. |
+| Syslog | Change the [syslog configuration](../agents/data-sources-syslog.md) to: <br> - Reduce the number of facilities collected. <br> - Collect only required event levels. For example, do not collect *Info* and *Debug* level events. |
+| AzureDiagnostics | Change the [resource log collection](../essentials/diagnostic-settings.md#create-in-azure-portal) to: <br> - Reduce the number of resources that send logs to Log Analytics. <br> - Collect only required logs. |
+| Solution data from computers that don't need the solution | Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers. |
+| Application Insights | Review options for [managing Application Insights data volume](../app/pricing.md#managing-your-data-volume). |
+| [SQL Analytics](../insights/azure-sql.md) | Use [Set-AzSqlServerAudit](/powershell/module/az.sql/set-azsqlserveraudit) to tune the auditing settings. |
+## Create an alert when data collection is high
-```Kusto
-W3CIISLog
-| where TimeGenerated > datetime(1970-01-01)
-| where _TimeReceived >= datetime(2021-05-02) and _TimeReceived < datetime(2021-05-03)
-| where _IsBillable == true
-| summarize BillableDataMB = sum(_BilledSize)/1.E6 by bin(TimeGenerated, 1d)
-| sort by TimeGenerated asc
-```
+This section describes how to create an alert when the data volume in the last 24 hours exceeded a specified amount, using Azure Monitor [Log Alerts](../alerts/alerts-unified-log.md).
-The `where TimeGenerated > datetime(1970-01-01)` statement is present only to provide the clue to the Log Analytics user interface to look over all data.
+To alert if the billable data volume ingested in the last 24 hours was greater than 50 GB:
+
+- **Define alert condition**: specify your Log Analytics workspace as the resource target.
+- **Alert criteria**: specify the following:
+ - **Signal Name**: select **Custom log search**.
+ - **Search query**: `Usage | where IsBillable | summarize DataGB = sum(Quantity / 1000.) | where DataGB > 50`.
+ - **Alert logic**: **Based on** *number of results*, **Condition** *Greater than*, **Threshold** *0*.
+ - **Time period**: *1440* minutes, and **Alert frequency**: every *1440* minutes, to run once a day.
+- **Define alert details**: specify the following:
+ - **Name**: *Billable data volume greater than 50 GB in 24 hours*
+ - **Severity**: *Warning*
+
+To be notified when the log alert matches the criteria, specify an existing [action group](../alerts/action-groups.md) or create a new one.
+
+When you receive an alert, use the steps in the sections above to troubleshoot why usage is higher than expected.
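For example, a first-pass sketch like the following breaks down the last 24 hours of billable ingestion by data type, so you can see which tables pushed the volume over the alert threshold.

```kusto
// Sketch: billable ingestion (GB) by data type over the last 24 hours.
Usage
| where TimeGenerated > ago(24h)
| where IsBillable == true
| summarize DataGB = sum(Quantity) / 1000. by DataType
| sort by DataGB desc
```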
## Querying for common data types
To dig deeper into the source of data for a particular data type, here are some
+ **AzureDiagnostics** data type - `AzureDiagnostics | summarize AggregatedValue = count() by ResourceProvider, ResourceId`
-## Tips for reducing data volume
+
+## Understanding nodes sending data
-This table lists some suggestions for reducing the volume of logs collected.
+To understand the number of nodes that are reporting heartbeats from the agent each day in the last month, use this query:
-| Source of high data volume | How to reduce data volume |
-| -- | - |
-| Data Collection Rules | The [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) uses Data Collection Rules to manage the collection of data. You can [limit the collection of data](../agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) using custom XPath queries. |
-| Container Insights | [Configure Container Insights](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to collect only the data you required. |
-| Microsoft Sentinel | Review any [Sentinel data sources](../../sentinel/connect-data-sources.md) that you recently enabled as sources of additional data volume. [Learn more](../../sentinel/azure-sentinel-billing.md) about Sentinel costs and billing. |
-| Security events | Select [common or minimal security events](../../security-center/security-center-enable-data-collection.md#data-collection-tier). <br> Change the security audit policy to collect only needed events. In particular, review the need to collect events for: <br> - [audit filtering platform](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772749(v=ws.10)). <br> - [audit registry](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941614(v%3dws.10)). <br> - [audit file system](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772661(v%3dws.10)). <br> - [audit kernel object](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd941615(v%3dws.10)). <br> - [audit handle manipulation](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd772626(v%3dws.10)). <br> - audit removable storage. |
-| Performance counters | Change the [performance counter configuration](../agents/data-sources-performance-counters.md) to: <br> - Reduce the frequency of collection. <br> - Reduce the number of performance counters. |
-| Event logs | Change the [event log configuration](../agents/data-sources-windows-events.md) to: <br> - Reduce the number of event logs collected. <br> - Collect only required event levels. For example, do not collect *Information* level events. |
-| Syslog | Change the [syslog configuration](../agents/data-sources-syslog.md) to: <br> - Reduce the number of facilities collected. <br> - Collect only required event levels. For example, do not collect *Info* and *Debug* level events. |
-| AzureDiagnostics | Change the [resource log collection](../essentials/diagnostic-settings.md#create-in-azure-portal) to: <br> - Reduce the number of resources that send logs to Log Analytics. <br> - Collect only required logs. |
-| Solution data from computers that don't need the solution | Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers. |
-| Application Insights | Review options for [managing Application Insights data volume](../app/pricing.md#managing-your-data-volume). |
-| [SQL Analytics](../insights/azure-sql.md) | Use [Set-AzSqlServerAudit](/powershell/module/az.sql/set-azsqlserveraudit) to tune the auditing settings. |
+```kusto
+Heartbeat
+| where TimeGenerated > startofday(ago(31d))
+| summarize nodes = dcount(Computer) by bin(TimeGenerated, 1d)
+| render timechart
+```
+To get a count of nodes sending data in the last 24 hours, use this query:
-### Getting nodes as billed in the Per Node pricing tier
+```kusto
+find where TimeGenerated > ago(24h) project Computer
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| summarize nodes = dcount(computerName)
+```
-To get a list of computers that will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes that are sending **billed data types** (some data types are free). To do this, use the [_IsBillable property](./log-standard-columns.md#_isbillable) and use the leftmost field of the fully qualified domain name. This returns the count of computers with billed data per hour (which is the granularity at which nodes are counted and billed):
+To get a list of nodes sending any data (and the amount of data sent by each), use this query:
```kusto
-find where TimeGenerated > ago(24h) project Computer, TimeGenerated
+find where TimeGenerated > ago(24h) project _BilledSize, Computer
| extend computerName = tolower(tostring(split(Computer, '.')[0])) | where computerName != ""
-| summarize billableNodes=dcount(computerName) by bin(TimeGenerated, 1h) | sort by TimeGenerated asc
+| summarize TotalVolumeBytes=sum(_BilledSize) by computerName
```
+> [!TIP]
+> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results **per computer**, then query on the **Usage** data type.
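
As an illustrative sketch of that **Usage**-based approach (it reports billable volume by data type for the last 24 hours rather than per-computer node counts):

```kusto
Usage
| where TimeGenerated > ago(24h)
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000. by DataType
| sort by BillableDataGB desc
```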
+
+### Nodes billed by the legacy Per Node pricing tier
+
+The [legacy Per Node pricing tier](#legacy-pricing-tiers) bills for nodes with hourly granularity and also doesn't count nodes that are only sending a set of security data types. To get a list of computers that will be billed as nodes if the workspace is in the legacy Per Node pricing tier, look for nodes that are sending **billed data types** (some data types are free). To do this, use the [_IsBillable property](./log-standard-columns.md#_isbillable) and use the leftmost field of the fully qualified domain name. This returns the count of computers with billed data per hour:
+
+```kusto
+find where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now()) project Computer, _IsBillable, Type, TimeGenerated
+| where Type !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
+| extend computerName = tolower(tostring(split(Computer, '.')[0]))
+| where computerName != ""
+| where _IsBillable == true
+| summarize billableNodesPerHour=dcount(computerName) by bin(TimeGenerated, 1h)
+| summarize billableNodesPerDay = sum(billableNodesPerHour)/24., billableNodeMonthsPerDay = sum(billableNodesPerHour)/24./31. by day=bin(TimeGenerated, 1d)
+| sort by day asc
+```
+
+The number of units on your bill is in units of node months, which is represented by `billableNodeMonthsPerDay` in the query.
+If the workspace has the Update Management solution installed, add the **Update** and **UpdateSummary** data types to the list in the where clause in the above query. Finally, there's some additional complexity in the actual billing algorithm when solution targeting is used that's not represented in the above query.
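
For example, with the Update Management solution installed, the exclusion filter in the query above becomes:

```kusto
| where Type !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent", "Update", "UpdateSummary")
```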
+
+> [!TIP]
+> Use these `find` queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-performance-pane) to execute. If you don't need results **per computer**, then query on the **Usage** data type.
+
### Getting Security and Automation node counts

To see the number of distinct Security nodes, you can use the query:
This query isn't an exact replication of how usage is calculated, but it provide
> [!NOTE]
> To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
-## Create an alert when data collection is high
-
-This section describes how to create an alert when the data volume in the last 24 hours exceeded a specified amount, using Azure Monitor [Log Alerts](../alerts/alerts-unified-log.md).
-
-To alert if the billable data volume ingested in the last 24 hours was greater than 50 GB:
--- **Define alert condition** specify your Log Analytics workspace as the resource target.-- **Alert criteria** specify the following:
- - **Signal Name** select **Custom log search**
- - **Search query** to `Usage | where IsBillable | summarize DataGB = sum(Quantity / 1000.) | where DataGB > 50`.
- - **Alert logic** is **Based on** *number of results* and **Condition** is *Greater than* a **Threshold** of *0*
- - **Time period** of *1440* minutes and **Alert frequency** to every *1440* minutes to run once a day.
-- **Define alert details** specify the following:
- - **Name** to *Billable data volume greater than 50 GB in 24 hours*
- - **Severity** to *Warning*
+## Late-arriving data
-To be notified when the log alert matches criteria, specify an existing or create a new [action group](../alerts/action-groups.md).
+Situations can arise where data is ingested with old timestamps. For example, an agent might not be able to communicate with Log Analytics because of a connectivity issue, or a host might have an incorrect date/time setting. This can manifest as an apparent discrepancy between the ingested data reported by the **Usage** data type and a query summing **_BilledSize** over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
-When you receive an alert, use the steps in the above sections about how to troubleshoot why usage is higher than expected.
+To diagnose late-arriving data issues, use the **_TimeReceived** column ([learn more](./log-standard-columns.md#_timereceived)) in addition to the **TimeGenerated** column. **_TimeReceived** is the time when the record was received by the Azure Monitor ingestion point in the Azure cloud. For example, if the **Usage** records show high ingested data volumes of **W3CIISLog** data on May 2, 2021, here's a query that identifies the timestamps on that ingested data:
+```kusto
+W3CIISLog
+| where TimeGenerated > datetime(1970-01-01)
+| where _TimeReceived >= datetime(2021-05-02) and _TimeReceived < datetime(2021-05-03)
+| where _IsBillable == true
+| summarize BillableDataMB = sum(_BilledSize)/1.E6 by bin(TimeGenerated, 1d)
+| sort by TimeGenerated asc
+```
+
+The `where TimeGenerated > datetime(1970-01-01)` statement is present only to hint to the Log Analytics user interface that the query should look over all data rather than the default time range.
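
To compare against the raw-data query above, here's a minimal sketch (assuming the same **W3CIISLog** example) that sums what the **Usage** data type reported for the same day; a large gap between the two results suggests late-arriving data:

```kusto
Usage
| where TimeGenerated >= datetime(2021-05-02) and TimeGenerated < datetime(2021-05-03)
| where DataType == "W3CIISLog" and IsBillable == true
| summarize UsageReportedMB = sum(Quantity)
```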
+
## Data transfer charges using Log Analytics

Sending data to Log Analytics might incur data bandwidth charges. However, that's limited to Virtual Machines where a Log Analytics agent is installed and doesn't apply when using Diagnostics settings or with other connectors that are built in to Microsoft Sentinel. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, this charge is very small compared to the costs for Log Analytics data ingestion. So, controlling costs for Log Analytics needs to focus on your [ingested data volume](#understanding-ingested-data-volume).
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
Be careful to not send data to Logs because it would be redundant with the data
You can install an Azure Monitor agent on individual machines by using the same methods for Azure virtual machines and Azure Arc-enabled servers. These methods include onboarding individual machines with the Azure portal or Resource Manager templates or enabling machines at scale by using Azure Policy. For hybrid machines that can't use Azure Arc-enabled servers, install the agent manually.
-To create a DCR and deploy the Azure Monitor agent to one or more agents by using the Azure portal, see [Create rule and association in the Azure portal](../agents/data-collection-rule-azure-monitor-agent.md). Other installation methods are described at [Install the Azure Monitor agent](../agents/azure-monitor-agent-install.md). To create a policy that automatically deploys the agent and DCR to any new machines as they're created, see [Deploy Azure Monitor at scale using Azure Policy](../best-practices.md).
+To create a DCR and deploy the Azure Monitor agent to one or more agents by using the Azure portal, see [Create rule and association in the Azure portal](../agents/data-collection-rule-azure-monitor-agent.md). Other installation methods are described at [Install the Azure Monitor agent](../agents/azure-monitor-agent-manage.md). To create a policy that automatically deploys the agent and DCR to any new machines as they're created, see [Deploy Azure Monitor at scale using Azure Policy](../best-practices.md).
## Next steps
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
- [How to troubleshoot issues with the Log Analytics agent for Linux](agents/agent-linux-troubleshoot.md) - [Overview of Azure Monitor agents](agents/agents-overview.md)-- [Install the Azure Monitor agent](agents/azure-monitor-agent-install.md)
+- [Install the Azure Monitor agent](agents/azure-monitor-agent-manage.md)
### Alerts
azure-netapp-files Application Volume Group Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/application-volume-group-disaster-recovery.md
na Previously updated : 12/22/2021 Last updated : 01/26/2022 # Add volumes for an SAP HANA system as a DR system using cross-region replication
The following table summarizes the replication schedule options. It also describ
| Volume type | Default replication schedule | Available options | Notes | ||||| | Data | Daily | Daily, hourly | The choice you select impacts Recover Time Objective (RTO) and the amount of transferred data. |
-| Log | - | - | Log volumes are not replicated. |
+| Log | - | - | Log volumes aren't replicated. |
| SAP shared | Every 10 minutes | Every 10 minutes, hourly, daily | You should choose a schedule based on your SLA requirements and the data stored in the shared volume. | | Data-backup | Daily | Daily, weekly | Replicating the data-backup volumes is optional. | | Log-backup | Every 10 minutes | Every 10 minutes | This setting impacts Recover Point Objective (RPO). |
The following example adds volumes to an SAP HANA system. The system serves as a
* **Group name**: The volume group name. * **SAP node memory**:
- This value defines the size of the SAP HANA database on the host. It is used to calculate the required volume size and throughput.
+ This value defines the size of the SAP HANA database on the host. It's used to calculate the required volume size and throughput.
* **Capacity overhead (%)**: When you use snapshots for data protection, you need to plan for extra capacity. This field will add additional size (%) for the data volume. You can estimate this value by using `"change rate per day" X "number of days retention"`. * **Single-host**: Select this option for an SAP HANA single-host system or the first host for a multiple-host system. Only the shared, log-backup, and data-backup volumes will be created with the first host. * **Multiple-host**:
- Select this option if you are adding additional hosts to a multiple-hosts HANA system.
+ Select this option if you're adding additional hosts to a multiple-hosts HANA system.
* **Disaster recover destination**: Select this option to create volumes for a HANA system as a DR site using [cross-region replication](cross-region-replication-introduction.md).
The following example adds volumes to an SAP HANA system. The system serves as a
* **Proximity placement group (PPG)**: Specifies that the data and shared volumes are to be created close to the disaster recovery VMs.
-      Even if you do not need the VMs for replication, you need to start at least one VM to anchor the PPG while provisioning the volumes.
+      Even if you don't need the VMs for replication, you need to start at least one VM to anchor the PPG while provisioning the volumes.
* **Capacity pool**: All volumes will be placed in a single manual QoS capacity pool. If you want to create the log-backup and data-backup volumes in a separate capacity pool, you can choose not to add those volumes to the volume group.
The following example adds volumes to an SAP HANA system. The system serves as a
The Volumes tab also displays the volume type:
- * **DP** - Indicates destination in the cross-region replication setting. Volumes of this type are not online but in replication mode.
+ * **DP** - Indicates destination in the cross-region replication setting. Volumes of this type aren't online but in replication mode.
* **RW** - Indicates that reads and writes are allowed.
- The default type for the log volume is RW, and the setting cannot be changed.
+ The default type for the log volume is `RW`, and the setting can't be changed.
- The default type for the data, shared, and log-backup volumes is DP, and the setting cannot be changed.
+ The default type for the data, shared, and log-backup volumes is `DP`, and the setting can't be changed.
The default type for the data-backup volume is DP, but this setting can be changed to RW.
The following example adds volumes to an SAP HANA system. The system serves as a
2. For each source volume, click **Replication** and then **Authorize**. Paste the **Resource ID** of each corresponding destination volume.
+## Setup options for replicating an SAP HANA database using HANA system replication for HA
+
+In some situations, you might want to combine an HA setup of HANA system replication with a disaster-recovery (DR) setup using cross-region replication (CRR). Depending on the specific usage pattern and service-level agreement (SLA), two setup options for replication are possible. This section describes the options.
+
+### Replicate only the primary HANA database volumes
+
+In this scenario, you typically don't change roles for the primary and secondary systems. A takeover is done only in an emergency. As such, the application-consistent snapshot backups required for CRR are taken mostly on the primary host, because only the primary HANA database can be used to create a backup.
+
+The following diagram describes this scenario:
+
+[ ![Diagram that shows replication for only the primary HANA database volumes.](../media/azure-netapp-files/replicate-only-primary-database-volumes.png) ](../media/azure-netapp-files/replicate-only-primary-database-volumes.png#lightbox)
+
+In this scenario, a DR setup must include only the volumes of the primary HANA system. With the daily replication of the primary data volume and the log backups of both the primary and secondary systems, the system can be recovered at the DR site. In the diagram, a single volume is used for the log backups of the primary and secondary systems.
+
+If the secondary HSR host takes over, the snapshot backups taken in the secondary system won't be replicated, but log backups of the secondary will continue to be replicated. If a disaster happens, the system at the DR site can still be recovered using the old snapshot backup from the former primary and the replicated log backups from both hosts. RTO will increase because more logs must be recovered, depending on how long the HSR pair runs in takeover mode. If takeover mode lasts significantly longer and RTO becomes a problem, you need to set up a new CRR replication that includes the data volume of the secondary system.
+
+The workflow for this scenario is identical to the [Add volumes](#add-volumes) workflow.
+
+### Replicate both primary and secondary HANA database volumes
+
+For reasons other than HA, you might want to periodically switch roles between the primary and secondary HANA systems. In this scenario, application-consistent backups must be created on both HANA hosts.
+
+The following diagram describes this scenario:
+
+[ ![Diagram that shows replication for both the primary and the secondary HANA database volumes.](../media/azure-netapp-files/replicate-both-primary-secondary-database-volumes.png) ](../media/azure-netapp-files/replicate-both-primary-secondary-database-volumes.png#lightbox)
+
+In this scenario, you might want to replicate both sets of volumes from the primary and secondary HANA systems as shown in the diagram.
+
+To create the volumes for the secondary replication target, the naming convention is adapted: to distinguish between the replication of the primary and the secondary database, the prefix changes from `DR` to `DR2` for the secondary HANA system. Except for this name change, the workflow is identical to the [Add volumes](#add-volumes) workflow.
+
+> [!NOTE]
+> For a detailed discussion of a DR solution for HANA with Azure NetApp Files, see [NetApp technical report TR-4891: SAP HANA disaster recovery with Azure NetApp Files](https://docs.netapp.com/us-en/netapp-solutions-sap/backup/saphana-dr-anf_data_protection_overview_overview.html). This technical report provides detailed background and examples about using CRR for SAP HANA on Azure NetApp Files.
+ ## Next steps * [Understand Azure NetApp Files application volume group for SAP HANA](application-volume-group-introduction.md)
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-release-notes.md
This page lists major changes made to AzAcSnap to provide new functionality or r
## Jan-2022
+### AzAcSnap v5.1 Preview (Build: 20220125.85030)
+ AzAcSnap v5.1 Preview (Build: 20220125.85030) has been released with the following new features: - Oracle Database support
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 12/07/2021 Last updated : 01/26/2022 # Guidelines for Azure NetApp Files network planning Network architecture planning is a key element of designing any application infrastructure. This article helps you design an effective network architecture for your workloads to benefit from the rich capabilities of Azure NetApp Files.
-Azure NetApp Files volumes are designed to be contained in a special purpose subnet called a [delegated subnet](../virtual-network/virtual-network-manage-subnet.md) within your Azure Virtual Network. Therefore, you can access the volumes directly from within Azure over VNet peering or from on-premises over a Virtual Network Gateway (ExpressRoute or VPN Gateway) as necessary. The subnet is dedicated to Azure NetApp Files and there is no connectivity to the Internet.
+Azure NetApp Files volumes are designed to be contained in a special purpose subnet called a [delegated subnet](../virtual-network/virtual-network-manage-subnet.md) within your Azure Virtual Network. Therefore, you can access the volumes directly from within Azure over VNet peering or from on-premises over a Virtual Network Gateway (ExpressRoute or VPN Gateway). The subnet is dedicated to Azure NetApp Files and there's no connectivity to the Internet.
## Configurable network features
- The [**Standard network features**](configure-network-features.md) configuration for Azure NetApp Files is available for public preview. After registering for this feature with your subscription, you can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features are not supported, the volume defaults to using the Basic network features.
+ The [**Standard network features**](configure-network-features.md) configuration for Azure NetApp Files is available for public preview. After registering for this feature with your subscription, you can create new volumes choosing *Standard* or *Basic* network features in supported regions. In regions where the Standard network features aren't supported, the volume defaults to using the Basic network features.
* ***Standard*** Selecting this setting enables higher IP limits and standard VNet features such as [network security groups](../virtual-network/network-security-groups-overview.md) and [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) on delegated subnets, and additional connectivity patterns as indicated in this article.
The following table describes what's supported for each network features confi
| Features | Standard network features | Basic network features | ||||
-| The number of IPs in use in a VNet with Azure NetApp Files (including immediately peered VNets) | [Standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) | 1000 |
-| ANF Delegated subnets per VNet | 1 | 1 |
+| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | [Standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) | 1000 |
+| Azure NetApp Files delegated subnets per VNet | 1 | 1 |
| [Network Security Groups](../virtual-network/network-security-groups-overview.md) (NSGs) on Azure NetApp Files delegated subnets | Yes | No | | [User-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) (UDRs) on Azure NetApp Files delegated subnets | Yes | No | | Connectivity to [Private Endpoints](../private-link/private-endpoint-overview.md) | No | No |
Subnets segment the virtual network into separate address spaces that are usable
Subnet delegation gives explicit permissions to the Azure NetApp Files service to create service-specific resources in the subnet. It uses a unique identifier in deploying the service. In this case, a network interface is created to enable connectivity to Azure NetApp Files.
-If you use a new VNet, you can create a subnet and delegate the subnet to Azure NetApp Files by following instructions in [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md). You can also delegate an existing empty subnet that is not already delegated to other services.
+If you use a new VNet, you can create a subnet and delegate the subnet to Azure NetApp Files by following instructions in [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md). You can also delegate an existing empty subnet that's not delegated to other services.
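
As a minimal Azure CLI sketch (the resource names are placeholders, and the linked article covers the portal flow and prerequisites), creating a delegated subnet looks like this:

```azurecli
# Hypothetical names; assumes the resource group and VNet already exist.
az network vnet subnet create \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --name anf-delegated-subnet \
    --address-prefixes 10.0.5.0/28 \
    --delegations "Microsoft.NetApp/volumes"
```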
-If the VNet is peered with another VNet, you cannot expand the VNet address space. For that reason, the new delegated subnet needs to be created within the VNet address space. If you need to extend the address space, you must delete the VNet peering before expanding the address space.
+If the VNet is peered with another VNet, you can't expand the VNet address space. For that reason, the new delegated subnet needs to be created within the VNet address space. If you need to extend the address space, you must delete the VNet peering before expanding the address space.
### UDRs and NSGs
User-defined routes (UDRs) and Network security groups (NSGs) are only supported
If the subnet has a combination of volumes with the Standard and Basic network features (or for existing volumes not registered for the feature preview), UDRs and NSGs applied on the delegated subnets will only apply to the volumes with the Standard network features.
-Configuring user-defined routes (UDRs) on the source VM subnets with address prefix of delegated subnet and next hop as NVA is not supported for volumes with the Basic network features. Such a setting will result in connectivity issues.
+Configuring user-defined routes (UDRs) on the source VM subnets with address prefix of delegated subnet and next hop as NVA isn't supported for volumes with the Basic network features. Such a setting will result in connectivity issues.
## Azure native environments
The following diagram illustrates an Azure-native environment:
### Local VNet
-A basic scenario is to create or connect to an Azure NetApp Files volume from a virtual machine (VM) in the same VNet. For VNet 2 in the diagram above, Volume 1 is created in a delegated subnet and can be mounted on VM 1 in the default subnet.
+A basic scenario is to create or connect to an Azure NetApp Files volume from a VM in the same VNet. For VNet 2 in the diagram, Volume 1 is created in a delegated subnet and can be mounted on VM 1 in the default subnet.
### VNet peering
If you have additional VNets in the same region that need access to each other
Consider VNet 2 and VNet 3 in the diagram above. If VM 1 needs to connect to VM 2 or Volume 2, or if VM 2 needs to connect to VM 1 or Volume 1, then you need to enable VNet peering between VNet 2 and VNet 3.
-Additionally, consider a scenario where VNet 1 is peered with VNet 2, and VNet 2 is peered with VNet 3 in the same region. The resources from VNet 1 can connect to resources in VNet 2 but it cannot connect to resources in VNet 3, unless VNet 1 and VNet 3 are peered.
+Also, consider a scenario where VNet 1 is peered with VNet 2, and VNet 2 is peered with VNet 3 in the same region. The resources from VNet 1 can connect to resources in VNet 2, but it can't connect to resources in VNet 3 unless VNet 1 and VNet 3 are peered.
-In the diagram above, although VM 3 can connect to Volume 1, VM 4 cannot connect to Volume 2. The reason for this is that the spoke VNets are not peered, and _transit routing is not supported over VNet peering_.
+In the diagram above, although VM 3 can connect to Volume 1, VM 4 can't connect to Volume 2. The reason for this is that the spoke VNets aren't peered, and _transit routing isn't supported over VNet peering_.
## Hybrid environments
In the topology illustrated above, the on-premises network is connected to a hub
* On-premises resources VM 1 and VM 2 can connect to Volume 2 or Volume 3 over a site-to-site VPN and regional VNet peering. * VM 3 in the hub VNet can connect to Volume 2 in spoke VNet 1 and Volume 3 in spoke VNet 2. * VM 4 from spoke VNet 1 and VM 5 from spoke VNet 2 can connect to Volume 1 in the hub VNet.
-* VM 4 in spoke VNet 1 cannot connect to Volume 3 in spoke VNet 2. Also, VM 5 in spoke VNet2 cannot connect to Volume 2 in spoke VNet 1. This is the case because the spoke VNets are not peered and _transit routing is not supported over VNet peering_.
-* In the above architecture if there is a gateway in the spoke VNET as well, the connectivity to the ANF volume from on-prem connecting over the gateway in the Hub will be lost. By design, preference would be given to the gateway in the spoke VNet and so only machines connecting over that gateway can connect to the ANF volume.
+* VM 4 in spoke VNet 1 can't connect to Volume 3 in spoke VNet 2. Also, VM 5 in spoke VNet2 can't connect to Volume 2 in spoke VNet 1. This is the case because the spoke VNets aren't peered and _transit routing isn't supported over VNet peering_.
+* In the above architecture, if there's a gateway in the spoke VNet as well, connectivity to the ANF volume from on-premises over the gateway in the hub VNet will be lost. By design, preference is given to the gateway in the spoke VNet, so only machines connecting over that gateway can connect to the ANF volume.
## Next steps
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 11/09/2021 Last updated : 01/26/2022 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Number of volumes per subscription | 500 | Yes | | Number of volumes per capacity pool | 500 | Yes | | Number of snapshots per volume | 255 | No |
-| Number of subnets delegated to Azure NetApp Files (Microsoft.NetApp/volumes) per Azure Virtual Network | 1 | No |
+| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | 1 | No |
| Number of used IPs in a VNet (including immediately peered VNets) with Azure NetApp Files | 1000 | No | | Minimum size of a single capacity pool | 4 TiB | No | | Maximum size of a single capacity pool | 500 TiB | No |
The following table describes resource limits for Azure NetApp Files:
| Maximum size of a single file | 16 TiB | No | | Maximum size of directory metadata in a single directory | 320 MB | No | | Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](#directory-limit). | No |
-| Maximum number of files ([maxfiles](#maxfiles)) per volume | 100 million | Yes |
+| Maximum number of files ([`maxfiles`](#maxfiles)) per volume | 100 million | Yes |
| Maximum number of export policy rules per volume | 5 | No | | Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No | | Maximum assigned throughput for a manual QoS volume | 4,500 MiB/s | No |
For more information, see [Capacity management FAQs](faq-capacity-management.md)
You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB).
-For a 320-MB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files containing non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
+For a 320-MB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
Examples:

```
  File: 'tmp1'
  Size: 4096      Blocks: 8          IO Block: 65536   directory
```
-## Maxfiles limits <a name="maxfiles"></a>
+## `Maxfiles` limits <a name="maxfiles"></a>
-Azure NetApp Files volumes have a limit called *maxfiles*. The maxfiles limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The maxfiles limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The maxfiles limit for a volume increases or decreases at the rate of 20 million files per TiB of provisioned volume size.
+Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 20 million files per TiB of provisioned volume size.
-The service dynamically adjusts the maxfiles limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a maxfiles limit of 20 million. Subsequent changes to the size of the volume would result in an automatic readjustment of the maxfiles limit based on the following rules:
+The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 20 million. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
-| Volume size (quota) | Automatic readjustment of the maxfiles limit |
+| Volume size (quota) | Automatic readjustment of the `maxfiles` limit |
|-|-| | <= 1 TiB | 20 million | | > 1 TiB but <= 2 TiB | 40 million |
The service dynamically adjusts the maxfiles limit for a volume based on its pro
| > 3 TiB but <= 4 TiB | 80 million | | > 4 TiB | 100 million |
-If you have already allocated at least 4 TiB of quota for a volume, you can initiate a [support request](#request-limit-increase) to increase the maxfiles (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the maxfiles limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
+If you have allocated at least 4 TiB of quota for a volume, you can initiate a [support request](#request-limit-increase) to increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
-You can increase the maxfiles limit to 500 million if your volume quota is at least 20 TiB. <!-- ANF-11854 -->
+You can increase the `maxfiles` limit to 500 million if your volume quota is at least 20 TiB.
## Request limit increase
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-ldap-extended-groups.md
na Previously updated : 01/14/2022 Last updated : 01/27/2022 # Configure ADDS LDAP with extended groups for NFS volume access
This article explains the considerations and steps for enabling LDAP with extend
* The following table describes the Time to Live (TTL) settings for the LDAP cache. You need to wait until the cache is refreshed before trying to access a file or directory through a client. Otherwise, an access or permission denied message appears on the client.
- | Error condition | Resolution |
- |-|-|
| Cache | Default Timeout |
+ |-|-|
| Group membership list | 24-hour TTL |
| Unix groups | 24-hour TTL, 1-minute negative TTL |
| Unix users | 24-hour TTL, 1-minute negative TTL |
azure-portal Per Vm Quota Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/per-vm-quota-requests.md
Title: Increase VM-family vCPU quotas description: Learn how to request an increase in the vCPU quota limit for a VM family in the Azure portal, which increases the total regional vCPU limit by the same amount. Previously updated : 11/15/2021 Last updated : 1/26/2022
To request a standard vCPU quota increase per VM family from **Help + support**,
:::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal.":::
-From there, follow the steps as described above to complete your quota increase request.
+From there, follow the steps described in [Create a support request](how-to-create-azure-support-request.md#create-a-support-request).
## Increase multiple VM-family CPU quotas in one request
azure-portal Regional Quota Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/regional-quota-requests.md
Title: Increase regional vCPU quotas description: Learn how to request an increase in the vCPU quota limit for a region in the Azure portal. Previously updated : 11/15/2021 Last updated : 1/26/2022
To request a standard vCPU quota increase per VM family from **Help + support**,
:::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal.":::
-From there, follow the steps as described above to complete your regional quota increase request.
+From there, follow the steps described in [Create a support request](how-to-create-azure-support-request.md#create-a-support-request).
## Next steps
azure-portal Spot Quota https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/spot-quota.md
Title: Increase spot vCPU quotas description: Learn how to request increases for spot vCPU quotas in the Azure portal. Previously updated : 11/15/2021 Last updated : 1/26/2022
To request a spot vCPU quota increase from **Help + support**, create a new supp
:::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal.":::
-From there, follow the steps as described above to complete your spot quota increase request.
+From there, follow the steps described in [Create a support request](how-to-create-azure-support-request.md#create-a-support-request).
## Next steps
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following limits apply to [Azure role-based access control (Azure RBAC)](../
[!INCLUDE [signalr-service-limits](../../../includes/signalr-service-limits.md)]
+## Azure Virtual Desktop Service limits
++ ## Azure VMware Solution limits [!INCLUDE [azure-vmware-solutions-limits](../../azure-vmware/includes/azure-vmware-solutions-limits.md)]
azure-sql-edge Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/release-notes.md
Last updated 11/24/2020
This article describes what's new and what has changed with every new build of Azure SQL Edge.
+## Azure SQL Edge 1.0.5
+
+SQL engine build 15.0.2000.1562
+
+### What's new?
+
+- Security bug fixes
+ ## Azure SQL Edge 1.0.4 SQL engine build 15.0.2000.1559
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Previously updated : 01/24/2022 Last updated : 01/26/2022 # Tutorial: Add an Azure SQL Database elastic pool to a failover group
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
Previously updated : 01/24/2022 Last updated : 01/26/2022 # Tutorial: Add an Azure SQL Database to an autofailover group
azure-sql Add Database To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/add-database-to-failover-group-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use Azure CLI to add a database to a failover group
azure-sql Add Elastic Pool To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/add-elastic-pool-to-failover-group-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use CLI to add an Azure SQL Database elastic pool to a failover group
azure-sql Auditing Threat Detection Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/auditing-threat-detection-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use CLI to configure SQL Database auditing and Advanced Threat Protection
azure-sql Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/backup-database-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use CLI to backup an Azure SQL single database to an Azure storage container
azure-sql Copy Database To New Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/copy-database-to-new-server-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use CLI to copy a database in Azure SQL Database to a new server
azure-sql Create And Configure Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/create-and-configure-database-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use Azure CLI to create a single database and configure a firewall rule
azure-sql Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/import-from-bacpac-cli.md
Previously updated : 01/18/2022 Last updated : 01/26/2022 # Use CLI to import a BACPAC file into a database in SQL Database
azure-sql Monitor And Scale Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/monitor-and-scale-database-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use the Azure CLI to monitor and scale a single database in Azure SQL Database
azure-sql Move Database Between Elastic Pools Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/move-database-between-elastic-pools-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use Azure CLI to move a database in SQL Database in a SQL elastic pool
azure-sql Restore Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/restore-database-cli.md
Previously updated : 01/18/2022 Last updated : 01/26/2022 # Use CLI to restore a single database in Azure SQL Database to an earlier point in time
azure-sql Scale Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/scale-pool-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use the Azure CLI to scale an elastic pool in Azure SQL Database
azure-sql Setup Geodr Failover Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-database-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use CLI to configure active geo-replication for a single database in Azure SQL Database
azure-sql Setup Geodr Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-group-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use CLI to configure a failover group for a group of databases in Azure SQL Database
azure-sql Setup Geodr Failover Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-pool-cli.md
Previously updated : 01/17/2022 Last updated : 01/26/2022 # Use CLI to configure active geo-replication for a pooled database in Azure SQL Database
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-quickstart.md
Previously updated : 01/24/2022 Last updated : 01/26/2022 # Quickstart: Create an Azure SQL Database single database
azure-sql Doc Changes Updates Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/doc-changes-updates-known-issues.md
ms.devlang: Previously updated : 09/24/2021 Last updated : 01/25/2022 # Known issues with Azure SQL Managed Instance [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
This article lists the currently known issues with [Azure SQL Managed Instance](
|Issue |Date discovered |Status |Date resolved | |||||
+|[Querying external table fails with 'not supported' error message](#querying-external-table-fails-with-not-supported-error-message)|Jan 2022|Has Workaround||
|[When using SQL Server authentication, usernames with '@' are not supported](#when-using-sql-server-authentication-usernames-with--are-not-supported)|Oct 2021||| |[Misleading error message on Azure portal suggesting recreation of the Service Principal](#misleading-error-message-on-azure-portal-suggesting-recreation-of-the-service-principal)|Sep 2021||| |[Changing the connection type does not affect connections through the failover group endpoint](#changing-the-connection-type-does-not-affect-connections-through-the-failover-group-endpoint)|Jan 2021|Has Workaround||
The `@query` parameter in the [sp_send_db_mail](/sql/relational-databases/system
## Has workaround
+### Querying external table fails with not supported error message
+Querying an external table may fail with the generic error message "_Queries over external tables are not supported with the current service tier or performance level of this database. Consider upgrading the service tier or performance level of the database_". The only type of external table supported in Azure SQL Managed Instance is PolyBase external tables (in preview). To allow queries on PolyBase external tables, you need to enable PolyBase on the managed instance by running the `sp_configure` command.
+
+External tables related to the [Elastic Query](../database/elastic-query-overview.md) feature of Azure SQL Database are [not supported](../database/features-comparison.md#features-of-sql-database-and-sql-managed-instance) in SQL Managed Instance, but creating and querying them wasn't explicitly blocked. With support for PolyBase external tables, new checks have been introduced that block querying of _any_ type of external table in a managed instance unless PolyBase is enabled.
+
+If you're using unsupported Elastic Query external tables to query data in Azure SQL Database or Azure Synapse from your managed instance, use the Linked Server feature instead. To establish a Linked Server connection from SQL Managed Instance to SQL Database, follow the instructions in [this article](https://techcommunity.microsoft.com/t5/azure-database-support-blog/lesson-learned-63-it-is-possible-to-create-linked-server-in/ba-p/369168). To establish a Linked Server connection from SQL Managed Instance to Azure Synapse, see these [step-by-step instructions](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance/#how-to-use-linked-servers). Because configuring and testing a Linked Server connection takes some time, you can use the workaround below as a temporary solution to enable querying external tables related to the Elastic Query feature.
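
As an illustration only, the following hedged T-SQL sketch shows what such a linked server from a managed instance to an Azure SQL Database logical server can look like. The server, database, login, and table names are placeholders, and the linked articles above remain the authoritative setup steps:

```sql
-- Hypothetical names; replace with your own logical server, database, login, and table.
EXEC sp_addlinkedserver
    @server = N'RemoteSqlDb',
    @srvproduct = N'',
    @provider = N'MSOLEDBSQL',
    @datasrc = N'yourserver.database.windows.net',
    @catalog = N'yourdatabase';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'RemoteSqlDb',
    @useself = N'FALSE',
    @locallogin = NULL,
    @rmtuser = N'yourlogin',
    @rmtpassword = N'yourpassword';

-- Query a remote table through the linked server instead of an Elastic Query external table.
SELECT TOP (10) * FROM RemoteSqlDb.yourdatabase.dbo.yourtable;
```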
+
+**Workaround**: Execute the following commands (once per instance) to enable queries on external tables:
+
+```sql
+sp_configure 'polybase enabled', 1
+go
+reconfigure
+go
+```
+ ### Changing the connection type does not affect connections through the failover group endpoint
-If an instance participates in an [auto-failover group](../database/auto-failover-group-overview.md), changing the instance's [connection type](../managed-instance/connection-types-overview.md) does not take effect for the connections established through the failover group listener endpoint.
+If an instance participates in an [auto-failover group](../database/auto-failover-group-overview.md), changing the instance's [connection type](../managed-instance/connection-types-overview.md) doesn't take effect for the connections established through the failover group listener endpoint.
**Workaround**: Drop and recreate auto-failover group after changing the connection type.
END
### Distributed transactions can be executed after removing managed instance from Server Trust Group
-[Server Trust Groups](../managed-instance/server-trust-group-overview.md) are used to establish trust between managed instances that is prerequisite for executing [distributed transactions](../database/elastic-transactions-overview.md). After removing managed instance from Server Trust Group or deleting the group, you still might be able to execute distributed transactions. There is a workaround you can apply to be sure that distributed transactions are disabled and that is [user-initiated manual failover](../managed-instance/user-initiated-failover.md) on managed instance.
+[Server Trust Groups](../managed-instance/server-trust-group-overview.md) are used to establish trust between managed instances, which is a prerequisite for executing [distributed transactions](../database/elastic-transactions-overview.md). After removing a managed instance from a Server Trust Group or deleting the group, you still might be able to execute distributed transactions. To make sure that distributed transactions are disabled, apply the workaround of performing a [user-initiated manual failover](../managed-instance/user-initiated-failover.md) on the managed instance.
### Distributed transactions cannot be executed after managed instance scaling operation
SQL Managed Instance scaling operations that include changing service tier or nu
### Cannot create SQL Managed Instance with the same name as logical server previously deleted
-A DNS record of `<name>.database.windows.com` is created when you create a [logical server in Azure](../database/logical-servers.md) for Azure SQL Database, and when you create a SQL Managed Instance. The DNS record must be unique. As such, if you create a logical server for SQL Database and then delete it, there is a threshold period of 7 days before the name is released from the records. In that period, a SQL Managed Instance cannot be created with the same name as the deleted logical server. As a workaround, use a different name for the SQL Managed Instance, or create a support ticket to release the logical server name.
+A DNS record of `<name>.database.windows.com` is created when you create a [logical server in Azure](../database/logical-servers.md) for Azure SQL Database, and when you create a SQL Managed Instance. The DNS record must be unique. As such, if you create a logical server for SQL Database and then delete it, there's a threshold period of 7 days before the name is released from the records. In that period, a SQL Managed Instance cannot be created with the same name as the deleted logical server. As a workaround, use a different name for the SQL Managed Instance, or create a support ticket to release the logical server name.
### Service Principal cannot access Azure AD and AKV In some circumstances, there might exist an issue with Service Principal used to access Azure AD and Azure Key Vault (AKV) services. As a result, this issue impacts usage of Azure AD authentication and Transparent Database Encryption (TDE) with SQL Managed Instance. This might be experienced as an intermittent connectivity issue, or not being able to run statements such are `CREATE LOGIN/USER FROM EXTERNAL PROVIDER` or `EXECUTE AS LOGIN/USER`. Setting up TDE with customer-managed key on a new Azure SQL Managed Instance might also not work in some circumstances.
-**Workaround**: To prevent this issue from occurring on your SQL Managed Instance before executing any update commands, or in case you have already experienced this issue after update commands, go to Azure portal, access SQL Managed Instance [Active Directory admin page](../database/authentication-aad-configure.md?tabs=azure-powershell#azure-portal). Verify if you can see the error message "Managed Instance needs a Service Principal to access Azure Active Directory. Click here to create a Service Principal". In case you have encountered this error message, click on it, and follow the step-by-step instructions provided until this error have been resolved.
+**Workaround**: To prevent this issue from occurring on your SQL Managed Instance before executing any update commands, or if you've already experienced this issue after update commands, go to the Azure portal and open the SQL Managed Instance [Active Directory admin page](../database/authentication-aad-configure.md?tabs=azure-powershell#azure-portal). Check whether you see the error message "Managed Instance needs a Service Principal to access Azure Active Directory. Click here to create a Service Principal". If you see this error message, click on it and follow the step-by-step instructions provided until the error has been resolved.
### Limitation of manual failover via portal for failover groups
_Active Directory admin_ blade of Azure portal for Azure SQL Managed Instance ma
"Managed Instance needs a Service Principal to access Azure Active Directory. Click here to create a Service Principal"
-You can neglect this error message if Service Principal for the managed instance already exists, and/or AAD authentication on the managed instance works.
+You can ignore this error message if the Service Principal for the managed instance already exists, or if Azure Active Directory authentication on the managed instance works.
To check whether Service Principal exists, navigate to the _Enterprise applications_ page on the Azure portal, choose _Managed Identities_ from the _Application type_ dropdown list, select _Apply_ and type the name of the managed instance in the search box. If the instance name shows up in the result list, Service Principal already exists and no further actions are needed.
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/resource-limits.md
Support for the premium-series hardware generations (public preview) is currentl
| Australia East | Yes | Yes | | Canada Central | Yes | | | Canada East | Yes | |
-| East US | | Yes |
| East US 2 | Yes | | | France Central | | Yes | | Germany West Central | | Yes | | Japan East | Yes | | | Korea Central | Yes | |
-| North Central US | Yes | Yes |
+| North Central US | Yes | |
| North Europe | Yes | | | South Central US | Yes | Yes | | Southeast Asia | Yes | |
azure-sql Create Configure Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/create-configure-managed-instance-cli.md
Previously updated : 01/18/2022 Last updated : 01/26/2022 # Use CLI to create an Azure SQL Managed Instance
azure-sql Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/restore-geo-backup-cli.md
Previously updated : 01/18/2022 Last updated : 01/26/2022 # Use CLI to restore a Managed Instance database to another geo-region
azure-sql Transparent Data Encryption Byok Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md
Previously updated : 01/18/2022 Last updated : 01/26/2022 # Manage Transparent Data Encryption in a Managed Instance using your own key from Azure Key Vault
backup Backup Create Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-create-rs-vault.md
Before you begin, consider the following information:
- Using Cross Region Restore will incur additional charges. [Learn more](https://azure.microsoft.com/pricing/details/backup/). - After you opt in, it might take up to 48 hours for the backup items to be available in secondary regions. - Cross Region Restore currently can't be reverted to GRS or LRS after the protection starts for the first time.-- Currently, the recovery point objective for a secondary region is up to 12 hours from the primary region, even though [read-access geo-redundant storage (RA-GRS)](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) replication is 15 minutes.
+- Currently, the RPO for the secondary region is 36 hours. This is because the RPO in the primary region is 24 hours, and it can take up to 12 hours to replicate the backup data from the primary to the secondary region.
A vault created with GRS redundancy includes the option to configure the Cross Region Restore feature. Every GRS vault has a banner that links to the documentation.
backup Backup Release Notes Archived https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-release-notes-archived.md
+
+ Title: Azure Backup release notes - Archive
+description: Learn about past feature releases in Azure Backup.
+ Last updated : 01/27/2022+++++
+# Archived release notes in Azure Backup
+
+This article lists all the past releases of features and improvements from Azure Backup. For the most recent releases, see [What's new in Azure Backup](whats-new.md).
+
+## Release summary
+
+- January 2021
+ - [Azure Disk Backup (in preview)](#azure-disk-backup-in-preview)
+ - [Encryption at rest using customer-managed keys (general availability)](#encryption-at-rest-using-customer-managed-keys)
+- November 2020
+ - [Azure Resource Manager template for Azure file share (AFS) backup](#azure-resource-manager-template-for-afs-backup)
+ - [Incremental backups for SAP HANA databases on Azure VMs (in preview)](#incremental-backups-for-sap-hana-databases-in-preview)
+- September 2020
+ - [Backup Center (in preview)](#backup-center-in-preview)
+ - [Back up Azure Database for PostgreSQL (in preview)](#back-up-azure-database-for-postgresql-in-preview)
+ - [Selective disk backup and restore](#selective-disk-backup-and-restore)
+ - [Cross Region Restore for SQL Server and SAP HANA databases on Azure VMs (in preview)](#cross-region-restore-for-sql-server-and-sap-hana-in-preview)
+ - [Support for backup of VMs with up to 32 disks (general availability)](#support-for-backup-of-vms-with-up-to-32-disks)
+ - [Simplified backup configuration experience for SQL in Azure VMs](#simpler-backup-configuration-for-sql-in-azure-vms)
+ - [Back up SAP HANA in RHEL Azure Virtual Machines (in preview)](#back-up-sap-hana-in-rhel-azure-virtual-machines-in-preview)
+ - [Zone redundant storage (ZRS) for backup data (in preview)](#zone-redundant-storage-zrs-for-backup-data-in-preview)
+ - [Soft delete for SQL Server and SAP HANA workloads in Azure VMs](#soft-delete-for-sql-server-and-sap-hana-workloads)
+
+## Azure Disk Backup (in preview)
+
+Azure Disk Backup offers a turnkey solution that provides snapshot lifecycle management for [Azure Managed Disks](../virtual-machines/managed-disks-overview.md) by automating the periodic creation of snapshots and retaining them for a configured duration by using a backup policy. You can manage the disk snapshots with zero infrastructure cost and without the need for custom scripting or any management overhead. This is a crash-consistent backup solution that takes point-in-time backups of a managed disk by using [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md), with support for multiple backups per day. It's also an agentless solution and doesn't impact production application performance. It supports backup and restore of both OS and data disks (including shared disks), whether or not they're currently attached to a running Azure virtual machine.
+
+For more information, see [Azure Disk Backup (in preview)](disk-backup-overview.md).
+
+## Encryption at rest using customer-managed keys
+
+Support for encryption at rest using customer-managed keys is now generally available. This gives you the ability to encrypt the backup data in your Recovery Services vaults by using your own keys stored in Azure Key Vault. The encryption key used for encrypting backups in the Recovery Services vault can be different from the ones used for encrypting the source. The data is protected by using an AES-256-based data encryption key (DEK), which is, in turn, protected by your keys stored in the Key Vault. Compared to encryption using platform-managed keys (which is available by default), this gives you more control over your keys and can help you better meet your compliance needs.
+
+For more information, see [Encryption of backup data using customer-managed keys](encryption-at-rest-with-cmk.md).
+
+## Azure Resource Manager template for AFS backup
+
+Azure Backup now supports configuring backup for existing Azure file shares using an Azure Resource Manager (ARM) template. The template configures protection for an existing Azure file share by specifying appropriate details for the Recovery Services vault and backup policy. It optionally creates a new Recovery Services vault and backup policy, and registers the storage account containing the file share to the Recovery Services vault.
+
+For more information, see [Azure Resource Manager templates for Azure Backup](backup-rm-template-samples.md).
+
+## Incremental backups for SAP HANA databases (in preview)
+
+Azure Backup now supports incremental backups for SAP HANA databases hosted on Azure VMs. This allows for faster and more cost-efficient backups of your SAP HANA data.
+
+For more information, see [various options available during creation of a backup policy](./sap-hana-faq-backup-azure-vm.yml) and [how to create a backup policy for SAP HANA databases](tutorial-backup-sap-hana-db.md#creating-a-backup-policy).
+
+## Backup Center (in preview)
+
+Azure Backup now provides a native management capability for managing your entire backup estate from a central console. Backup Center provides you with the capability to monitor, operate, govern, and optimize data protection at scale, in a manner consistent with Azure's native management experiences.
+
+With Backup Center, you get an aggregated view of your inventory across subscriptions, locations, resource groups, vaults, and even tenants by using Azure Lighthouse. Backup Center is also an action center from which you can trigger backup-related activities, such as configuring backups, restoring data, and creating policies or vaults, all from a single place. In addition, with seamless integration with Azure Policy, you can now govern your environment and track compliance from a backup perspective. Built-in Azure policies specific to Azure Backup also allow you to configure backups at scale.
+
+For more information, see [Overview of Backup Center](backup-center-overview.md).
+
+## Back up Azure Database for PostgreSQL (in preview)
+
+Azure Backup and Azure Database Services have come together to build an enterprise-class backup solution for Azure PostgreSQL (now in preview). Now you can meet your data protection and compliance needs with a customer-controlled backup policy that enables retention of backups for up to 10 years. With this, you have granular control to manage the backup and restore operations at the individual database level. Likewise, you can restore across PostgreSQL versions or to blob storage with ease.
+
+For more information, see [Azure Database for PostgreSQL backup](backup-azure-database-postgresql.md).
+
+## Selective disk backup and restore
+
+Azure Backup supports backing up all the disks (operating system and data) in a VM together using the virtual machine backup solution. Now, using the selective disks backup and restore functionality, you can back up a subset of the data disks in a VM. This provides an efficient and cost-effective solution for your backup and restore needs. Each recovery point contains only the disks that are included in the backup operation.
+
+For more information, see [Selective disk backup and restore for Azure virtual machines](selective-disk-backup-restore.md).
+
+## Cross Region Restore for SQL Server and SAP HANA (in preview)
+
+With the introduction of cross-region restore, you can now initiate restores in a secondary region at will to mitigate real downtime issues in a primary region for your environment. This makes the secondary region restores completely customer controlled. Azure Backup uses the backed-up data replicated to the secondary region for such restores.
+
+Now, in addition to support for cross-region restore for Azure virtual machines, the feature has been extended to restoring SQL and SAP HANA databases in Azure virtual machines as well.
+
+For more information, see [Cross Region Restore for SQL databases](restore-sql-database-azure-vm.md#cross-region-restore) and [Cross Region Restore for SAP HANA databases](sap-hana-db-restore.md#cross-region-restore).
+
+## Support for backup of VMs with up to 32 disks
+
+Until now, Azure Backup has supported 16 managed disks per VM. Now, Azure Backup supports backup of up to 32 managed disks per VM.
+
+For more information, see the [VM storage support matrix](backup-support-matrix-iaas.md#vm-storage-support).
+
+## Simpler backup configuration for SQL in Azure VMs
+
+Configuring backups for your SQL Server in Azure VMs is now even easier with inline backup configuration integrated into the VM pane of the Azure portal. In just a few steps, you can enable backup of your SQL Server to protect all the existing databases as well as the ones that get added in the future.
+
+For more information, see [Back up a SQL Server from the VM pane](backup-sql-server-vm-from-vm-pane.md).
+
+## Back up SAP HANA in RHEL Azure virtual machines (in preview)
+
+Azure Backup is the native backup solution for Azure and is BackInt certified by SAP. Azure Backup has now added support for Red Hat Enterprise Linux (RHEL), one of the most widely used Linux operating systems running SAP HANA.
+
+For more information, see the [SAP HANA database backup scenario support matrix](sap-hana-backup-support-matrix.md#scenario-support).
+
+## Zone redundant storage (ZRS) for backup data (in preview)
+
+Azure Storage provides a great balance of high performance, high availability, and high data resiliency with its varied redundancy options. Azure Backup allows you to extend these benefits to the backup data as well, with options to store your backups in locally redundant storage (LRS) and geo-redundant storage (GRS). Now, there are additional durability options with the added support for zone redundant storage (ZRS).
+
+For more information, see [Set storage redundancy for the Recovery Services vault](backup-create-rs-vault.md#set-storage-redundancy).
+
+## Soft delete for SQL Server and SAP HANA workloads
+
+Concerns about security issues, like malware, ransomware, and intrusion, are increasing. These security issues can be costly, in terms of both money and data. To guard against such attacks, Azure Backup provides security features to help protect backup data even after deletion.
+
+One such feature is soft delete. With soft delete, even if a malicious actor deletes a backup (or backup data is accidentally deleted), the backup data is retained for 14 additional days, allowing the recovery of that backup item with no data loss. The additional 14 days of retention for backup data in the "soft delete" state don't incur any cost to you.
+
+Now, in addition to soft delete support for Azure VMs, SQL Server and SAP HANA workloads in Azure VMs are also protected with soft delete.
+
+For more information, see [Soft delete for SQL server in Azure VM and SAP HANA in Azure VM workloads](soft-delete-sql-saphana-in-azure-vm.md).
backup Blob Backup Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-configure-manage.md
To assign the required role for storage accounts that you need to protect, follo
![Role assignment options](./media/blob-backup-configure-manage/role-assignment-options.png) >[!NOTE]
- >The role assignment might take up to 10 minutes to take effect.
+ >The role assignment might take up to 30 minutes to take effect.
## Create a backup policy
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 12/02/2021 Last updated : 01/27/2022
You can learn more about the new releases by bookmarking this page or by [subscr
- [Archive Tier support for Azure Backup (in preview)](#archive-tier-support-for-azure-backup-in-preview) - February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)-- January 2021
- - [Azure Disk Backup (in preview)](#azure-disk-backup-in-preview)
- - [Encryption at rest using customer-managed keys (general availability)](#encryption-at-rest-using-customer-managed-keys)
-- November 2020
- - [Azure Resource Manager template for Azure file share (AFS) backup](#azure-resource-manager-template-for-afs-backup)
- - [Incremental backups for SAP HANA databases on Azure VMs (in preview)](#incremental-backups-for-sap-hana-databases-in-preview)
-- September 2020
- - [Backup Center (in preview)](#backup-center-in-preview)
- - [Back up Azure Database for PostgreSQL (in preview)](#back-up-azure-database-for-postgresql-in-preview)
- - [Selective disk backup and restore](#selective-disk-backup-and-restore)
- - [Cross Region Restore for SQL Server and SAP HANA databases on Azure VMs (in preview)](#cross-region-restore-for-sql-server-and-sap-hana-in-preview)
- - [Support for backup of VMs with up to 32 disks (general availability)](#support-for-backup-of-vms-with-up-to-32-disks)
- - [Simplified backup configuration experience for SQL in Azure VMs](#simpler-backup-configuration-for-sql-in-azure-vms)
- - [Back up SAP HANA in RHEL Azure Virtual Machines (in preview)](#back-up-sap-hana-in-rhel-azure-virtual-machines-in-preview)
- - [Zone redundant storage (ZRS) for backup data (in preview)](#zone-redundant-storage-zrs-for-backup-data-in-preview)
- - [Soft delete for SQL Server and SAP HANA workloads in Azure VMs](#soft-delete-for-sql-server-and-sap-hana-workloads)
## Archive Tier support for SQL Server/ SAP HANA in Azure VM from Azure portal
Operational backup for blobs integrates with the Azure Backup management tools,
For more information, see [Overview of operational backup for Azure Blobs](blob-backup-overview.md).
+## Enhancements to encryption using customer-managed keys for Azure Backup (in preview)
+
+Azure Backup now provides enhanced capabilities (in preview) to manage encryption with customer-managed keys. You can bring your own keys to encrypt the backup data in your Recovery Services vaults, which gives you better control.
+
+- Supports user-assigned managed identities for granting the vault permissions to the keys that are used to encrypt data in the Recovery Services vault.
+- Enables encryption with customer-managed keys while creating a Recovery Services vault.
+ >[!NOTE]
+ >This feature is currently in limited preview. To sign up, fill [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR0H3_nezt2RNkpBCUTbWEapURDNTVVhGOUxXSVBZMEwxUU5FNDkyQkU4Ny4u), and write to us at [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com).
+- Allows you to use Azure Policies to audit and enforce encryption using customer-managed keys.
+>[!NOTE]
+>- The preceding capabilities are supported through the Azure portal only; PowerShell is currently not supported.<br>If you're using PowerShell to manage encryption keys for Backup, we don't recommend updating the keys from the portal.<br>If you update the key from the portal, you can't use PowerShell to update the encryption key until a PowerShell update that supports the new model is available. However, you can continue to update the key from the Azure portal.
+>- You can use the audit policy to audit vaults for which encryption with customer-managed keys was enabled after 04/01/2021.
+>- For vaults with the CMK encryption enabled before this date, the policy might fail to apply, or might show false negative results (that is, these vaults may be reported as non-compliant, despite having CMK encryption enabled). [Learn more](encryption-at-rest-with-cmk.md#use-azure-policies-to-audit-and-enforce-encryption-with-customer-managed-keys-in-preview).
+
+For more information, see [Encryption for Azure Backup using customer-managed keys](encryption-at-rest-with-cmk.md).
+ ## Azure Disk Backup is now generally available Azure Backup offers snapshot lifecycle management to Azure Managed Disks by automating periodic creation of snapshots and retaining these for configured durations using Backup policy.
Operational backup for Blobs integrates with Backup Center, among other Backup m
For more information, see [Overview of operational backup for Azure Blobs (in preview)](blob-backup-overview.md).
-## Azure Disk Backup (in preview)
-
-Azure Disk Backup offers a turnkey solution that provides snapshot lifecycle management for [Azure Managed Disks](../virtual-machines/managed-disks-overview.md) by automating periodic creation of snapshots and retaining it for a configured duration using backup policy. You can manage the disk snapshots with zero infrastructure cost and without the need for custom scripting or any management overhead. This is a crash-consistent backup solution that takes point-in-time backup of a managed disk using [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md) with support for multiple backups per day. It's also an agent-less solution and doesn't impact production application performance. It supports backup and restore of both OS and data disks (including shared disks), whether or not they're currently attached to a running Azure virtual machine.
-
-For more information, see [Azure Disk Backup (in preview)](disk-backup-overview.md).
-
-## Encryption at rest using customer-managed keys
-
-Support for encryption at rest using customer-managed keys is now generally available. This gives you the ability to encrypt the backup data in your Recovery Services vaults using your own keys stored in Azure Key Vaults. The encryption key used for encrypting backups in the Recovery Services vault may be different from the ones used for encrypting the source. The data is protected using an AES 256 based data encryption key (DEK), which is, in turn, protected using your keys stored in the Key Vault. Compared to encryption using platform-managed keys (which is available by default), this gives you more control over your keys and can help you better meet your compliance needs.
-
-For more information, see [Encryption of backup data using customer-managed keys](encryption-at-rest-with-cmk.md).
-
-## Azure Resource Manager template for AFS backup
-
-Azure Backup now supports configuring backup for existing Azure file shares using an Azure Resource Manager (ARM) template. The template configures protection for an existing Azure file share by specifying appropriate details for the Recovery Services vault and backup policy. It optionally creates a new Recovery Services vault and backup policy, and registers the storage account containing the file share to the Recovery Services vault.
-
-For more information, see [Azure Resource Manager templates for Azure Backup](backup-rm-template-samples.md).
-
-## Incremental backups for SAP HANA databases (in preview)
-
-Azure Backup now supports incremental backups for SAP HANA databases hosted on Azure VMs. This allows for faster and more cost-efficient backups of your SAP HANA data.
-
-For more information, see [various options available during creation of a backup policy](./sap-hana-faq-backup-azure-vm.yml) and [how to create a backup policy for SAP HANA databases](tutorial-backup-sap-hana-db.md#creating-a-backup-policy).
-
-## Backup Center (in preview)
-
-Azure Backup has enabled a new native management capability to manage your entire backup estate from a central console. Backup Center provides you with the capability to monitor, operate, govern, and optimize data protection at scale in a unified manner consistent with AzureΓÇÖs native management experiences.
-
-With Backup Center, you get an aggregated view of your inventory across subscriptions, locations, resource groups, vaults, and even tenants using Azure Lighthouse. Backup Center is also an action center from where you can trigger your backup related activities, such as configuring backup, restore, creation of policies or vaults, all from a single place. In addition, with seamless integration to Azure Policy, you can now govern your environment and track compliance from a backup perspective. In-built Azure Policies specific to Azure Backup also allow you to configure backups at scale.
-
-For more information, see [Overview of Backup Center](backup-center-overview.md).
-
-## Back up Azure Database for PostgreSQL (in preview)
-
-Azure Backup and Azure Database Services have come together to build an enterprise-class backup solution for Azure PostgreSQL (now in preview). Now you can meet your data protection and compliance needs with a customer-controlled backup policy that enables retention of backups for up to 10 years. With this, you have granular control to manage the backup and restore operations at the individual database level. Likewise, you can restore across PostgreSQL versions or to blob storage with ease.
-
-For more information, see [Azure Database for PostgreSQL backup](backup-azure-database-postgresql.md).
-
-## Selective disk backup and restore
-
-Azure Backup supports backing up all the disks (operating system and data) in a VM together using the virtual machine backup solution. Now, using the selective disks backup and restore functionality, you can back up a subset of the data disks in a VM. This provides an efficient and cost-effective solution for your backup and restore needs. Each recovery point contains only the disks that are included in the backup operation.
-
-For more information, see [Selective disk backup and restore for Azure virtual machines](selective-disk-backup-restore.md).
-
-## Cross Region Restore for SQL Server and SAP HANA (in preview)
-
-With the introduction of cross-region restore, you can now initiate restores in a secondary region at will to mitigate real downtime issues in a primary region for your environment. This makes the secondary region restores completely customer controlled. Azure Backup uses the backed-up data replicated to the secondary region for such restores.
-
-Now, in addition to support for cross-region restore for Azure virtual machines, the feature has been extended to restoring SQL and SAP HANA databases in Azure virtual machines as well.
-
-For more information, see [Cross Region Restore for SQL databases](restore-sql-database-azure-vm.md#cross-region-restore) and [Cross Region Restore for SAP HANA databases](sap-hana-db-restore.md#cross-region-restore).
-
-## Support for backup of VMs with up to 32 disks
-
-Until now, Azure Backup has supported 16 managed disks per VM. Now, Azure Backup supports backup of up to 32 managed disks per VM.
-
-For more information, see the [VM storage support matrix](backup-support-matrix-iaas.md#vm-storage-support).
-
-## Simpler backup configuration for SQL in Azure VMs
-
-Configuring backups for your SQL Server in Azure VMs is now even easier with inline backup configuration integrated into the VM pane of the Azure portal. In just a few steps, you can enable backup of your SQL Server to protect all the existing databases as well as the ones that get added in the future.
-
-For more information, see [Back up a SQL Server from the VM pane](backup-sql-server-vm-from-vm-pane.md).
-
-## Back up SAP HANA in RHEL Azure virtual machines (in preview)
-
-Azure Backup is the native backup solution for Azure and is BackInt certified by SAP. Azure Backup has now added support for Red Hat Enterprise Linux (RHEL), one of the most widely used Linux operating systems running SAP HANA.
-
-For more information, see the [SAP HANA database backup scenario support matrix](sap-hana-backup-support-matrix.md#scenario-support).
-
-## Zone redundant storage (ZRS) for backup data (in preview)
-
-Azure Storage provides a great balance of high performance, high availability, and high data resiliency with its varied redundancy options. Azure Backup allows you to extend these benefits to the backup data as well, with options to store your backups in locally redundant storage (LRS) and geo-redundant storage (GRS). Now, there are additional durability options with the added support for zone redundant storage (ZRS).
-
-For more information, see [Set storage redundancy for the Recovery Services vault](backup-create-rs-vault.md#set-storage-redundancy).
-
-## Soft delete for SQL Server and SAP HANA workloads
-
-Concerns about security issues, like malware, ransomware, and intrusion, are increasing. These security issues can be costly, in terms of both money and data. To guard against such attacks, Azure Backup provides security features to help protect backup data even after deletion.
-
-One such feature is soft delete. With soft delete, even if a malicious actor deletes a backup (or backup data is accidentally deleted), the backup data is retained for 14 additional days, allowing the recovery of that backup item with no data loss. The additional 14 days of retention for backup data in the "soft delete" state don't incur any cost to you.
-
-Now, in addition to soft delete support for Azure VMs, SQL Server and SAP HANA workloads in Azure VMs are also protected with soft delete.
-
-For more information, see [Soft delete for SQL server in Azure VM and SAP HANA in Azure VM workloads](soft-delete-sql-saphana-in-azure-vm.md).
-
-## Enhancements to encryption using customer-managed keys for Azure Backup (in preview)
-
-Azure Backup now provides enhanced capabilities (in preview) to manage encryption with customer-managed keys. Azure Backup allows you to bring in your own keys to encrypt the backup data in the Recovery Services vaults, thus providing you a better control.
--- Supports user-assigned managed identities to grant permissions to the keys to manage data encryption in the Recovery Services vault.-- Enables encryption with customer-managed keys while creating a Recovery Services vault.
- >[!NOTE]
- >This feature is currently in limited preview. To sign up, fill [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR0H3_nezt2RNkpBCUTbWEapURDNTVVhGOUxXSVBZMEwxUU5FNDkyQkU4Ny4u), and write to us at [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com).
-- Allows you to use Azure Policies to audit and enforce encryption using customer-managed keys.
->[!NOTE]
->- The above capabilities are supported through the Azure portal only, PowerShell is currently not supported.<br>If you are using PowerShell for managing encryption keys for Backup, we do not recommend to update the keys from the portal.<br>If you update the key from the portal, you canΓÇÖt use PowerShell to update the encryption key further, till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal.
->- You can use the audit policy for auditing vaults with encryption using customer-managed keys that are enabled after 04/01/2021.
->- For vaults with the CMK encryption enabled before this date, the policy might fail to apply, or might show false negative results (that is, these vaults may be reported as non-compliant, despite having CMK encryption enabled). [Learn more](encryption-at-rest-with-cmk.md#use-azure-policies-to-audit-and-enforce-encryption-with-customer-managed-keys-in-preview).
-
-For more information, see [Encryption for Azure Backup using customer-managed keys](encryption-at-rest-with-cmk.md).
- ## Next steps -- [Azure Backup guidance and best practices](guidance-best-practices.md)
+- [Azure Backup guidance and best practices](guidance-best-practices.md)
+- [Archived release notes](backup-release-notes-archived.md)
cognitive-services Anomaly Detector Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/anomaly-detector-container-configuration.md
This setting can be found in the following place:
|Required| Name | Data type | Description | |--||--|-|
-|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gathering required parameters](anomaly-detector-container-howto.md#gathering-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
+|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](anomaly-detector-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
## Eula setting
Replace value in brackets, `{}`, with your own values:
| Placeholder | Value | Format or example | |-|-|| | **{API_KEY}** | The endpoint key of the `Anomaly Detector` resource on the Azure `Anomaly Detector` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Anomaly Detector` Overview page.| See [gathering required parameters](anomaly-detector-container-howto.md#gathering-required-parameters) for explicit examples. |
+| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Anomaly Detector` Overview page.| See [gather required parameters](anomaly-detector-container-howto.md#gather-required-parameters) for explicit examples. |
[!INCLUDE [subdomains-note](../../../includes/cognitive-services-custom-subdomains-note.md)]
cognitive-services Anomaly Detector Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/anomaly-detector-container-howto.md
Once the container is on the [host computer](#the-host-computer), use the follow
## Run the container with `docker run`
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gathering required parameters](#gathering-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
[Examples](anomaly-detector-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
Once the container is on the [host computer](#the-host-computer), use the follow
## Run the container with `docker run`
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gathering required parameters](#gathering-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
[Examples](computer-vision-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
cognitive-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-resource-container-config.md
Replace {_argument_name_} with your own values:
| Placeholder | Value | Format or example | |-|-|| | **{API_KEY}** | The endpoint key of the `Computer Vision` resource on the Azure `Computer Vision` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Computer Vision` Overview page.| See [gathering required parameters](computer-vision-how-to-install-containers.md#gathering-required-parameters) for explicit examples. |
+| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Computer Vision` Overview page.| See [gather required parameters](computer-vision-how-to-install-containers.md#gather-required-parameters) for explicit examples. |
[!INCLUDE [subdomains-note](../../../includes/cognitive-services-custom-subdomains-note.md)]
cognitive-services Luis Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-container-configuration.md
This setting can be found in the following places:
| Required | Name | Data type | Description | |-||--|-|
-| Yes | `Billing` | string | Billing endpoint URI. For more information on obtaining the billing URI, see [gathering required parameters](luis-container-howto.md#gathering-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
+| Yes | `Billing` | string | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](luis-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
## Eula setting
Replace {_argument_name_} with your own values:
| Placeholder | Value | Format or example | |-|-|| | **{API_KEY}** | The endpoint key of the `LUIS` resource on the Azure `LUIS` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `LUIS` Overview page.| See [gathering required parameters](luis-container-howto.md#gathering-required-parameters) for explicit examples. |
+| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `LUIS` Overview page.| See [gather required parameters](luis-container-howto.md#gather-required-parameters) for explicit examples. |
[!INCLUDE [subdomains-note](../../../includes/cognitive-services-custom-subdomains-note.md)]
cognitive-services Luis Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-container-howto.md
To download the versioned package, refer to the [API documentation here][downloa
## Run the container with `docker run`
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gathering required parameters](#gathering-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.
[Examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/batch-transcription.md
Title: How to use batch transcription - Speech service
-description: Batch transcription is ideal if you want to transcribe a large quantity of audio in storage, such as Azure Blobs. By using the dedicated REST API, you can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcriptions.
+description: Batch transcription is ideal if you want to transcribe a large quantity of audio in storage, such as Azure blobs. By using the dedicated REST API, you can point to audio files with a shared access signature (SAS) URI, and asynchronously receive transcriptions.
# How to use batch transcription
-Batch transcription is a set of REST API operations that enables you to transcribe a large amount of audio in storage. You can point to audio files using a typical URI or a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI and asynchronously receive transcription results. With the v3.0 API, you can transcribe one or more audio files, or process a whole storage container.
+Batch transcription is a set of REST API operations that enables you to transcribe a large amount of audio in storage. You can point to audio files by using a typical URI or a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md) URI, and asynchronously receive transcription results. With the v3.0 API, you can transcribe one or more audio files, or process a whole storage container.
You can use batch transcription REST APIs to call the following methods:
-| Batch Transcription Operation | Method | REST API Call |
+| Batch transcription operation | Method | REST API call |
||--|-| | Creates a new transcription. | POST | speechtotext/v3.0/transcriptions | | Retrieves a list of transcriptions for the authenticated subscription. | GET | speechtotext/v3.0/transcriptions | | Gets a list of supported locales for offline transcriptions. | GET | speechtotext/v3.0/transcriptions/locales | | Updates the mutable details of the transcription identified by its ID. | PATCH | speechtotext/v3.0/transcriptions/{id} | | Deletes the specified transcription task. | DELETE | speechtotext/v3.0/transcriptions/{id} |
-| Gets the transcription identified by the given ID. | GET | speechtotext/v3.0/transcriptions/{id} |
-| Gets the result files of the transcription identified by the given ID. | GET | speechtotext/v3.0/transcriptions/{id}/files |
+| Gets the transcription identified by the specified ID. | GET | speechtotext/v3.0/transcriptions/{id} |
+| Gets the result files of the transcription identified by the specified ID. | GET | speechtotext/v3.0/transcriptions/{id}/files |
You can review and test the detailed API, which is available as a [Swagger document](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0).
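For orientation, here's a minimal, hedged C# sketch of calling the first operation in this table (creating a transcription) with `HttpClient`. The region, subscription key, and audio URI are placeholders, and the request body is a pared-down version of the JSON configuration shown later in this article.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CreateTranscriptionSketch
{
    static async Task Main()
    {
        // Placeholder values: substitute your own service region, subscription key, and audio SAS URI.
        const string region = "westus";
        const string subscriptionKey = "<your-speech-subscription-key>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

        // Minimal request body; see the Configuration section of this article for more options.
        const string body = @"{
          ""contentUrls"": [ ""<SAS URI to an audio file>"" ],
          ""locale"": ""en-US"",
          ""displayName"": ""My batch transcription""
        }";

        var response = await client.PostAsync(
            $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // A successful request returns 201 Created; the Location header points to the new transcription.
        Console.WriteLine($"{(int)response.StatusCode} {response.Headers.Location}");
    }
}
```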
-Batch transcription jobs are scheduled on a best effort basis.
-You cannot estimate when a job will change into the running state,
-but it should happen within minutes under normal system load.
-Once in the running state, the transcription occurs faster than the audio runtime playback speed.
+Batch transcription jobs are scheduled on a best-effort basis. You can't estimate when a job will change into the running state, but it should happen within minutes under normal system load. When the job is in the running state, the transcription occurs faster than the audio runtime playback speed.
## Prerequisites As with all features of the Speech service, you create a subscription key from the [Azure portal](https://portal.azure.com) by following our [Get started guide](overview.md#try-the-speech-service-for-free). >[!NOTE]
-> A standard subscription (S0) for Speech service is required to use batch transcription. Free subscription keys (F0) will not work. For more information, see [pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> To use batch transcription, you need a standard subscription (S0) for Speech service. Free subscription keys (F0) don't work. For more information, see [pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-If you plan to customize models, follow the steps in [Acoustic customization](./how-to-custom-speech-train-model.md) and [Language customization](./how-to-custom-speech-train-model.md). To use the created models in batch transcription, you need their model location. You can retrieve the model location when you inspect the details of the model (`self` property). A deployed custom endpoint is *not needed* for the batch transcription service.
+If you plan to customize models, follow the steps in [Acoustic customization](./how-to-custom-speech-train-model.md) and [Language customization](./how-to-custom-speech-train-model.md). To use the created models in batch transcription, you need their model location. You can retrieve the model location when you inspect the details of the model (the `self` property). A deployed custom endpoint is *not needed* for the batch transcription service.
>[!NOTE]
-> As a part of the REST API, Batch Transcription has a set of [quotas and limits](speech-services-quotas-and-limits.md#batch-transcription), which we encourage to review. To take the full advantage of Batch Transcription ability to efficiently transcribe a large number of audio files we recommend always sending multiple files per request or pointing to a Blob Storage container with the audio files to transcribe. The service will transcribe the files concurrently reducing the turnaround time. Using multiple files in a single request is very simple and straightforward - see [Configuration](#configuration) section.
+> As a part of the REST API, batch transcription has a set of [quotas and limits](speech-services-quotas-and-limits.md#batch-transcription). It's a good idea to review these. To take full advantage of the ability to efficiently transcribe a large number of audio files, send multiple files per request or point to an Azure Blob Storage container with the audio files to transcribe. The service transcribes the files concurrently, which reduces the turnaround time. For more information, see the [Configuration](#configuration) section of this article.
## Batch transcription API The batch transcription API supports the following formats:
-| Format | Codec | Bits Per Sample | Sample Rate |
+| Format | Codec | Bits per sample | Sample rate |
|--|-||| | WAV | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo | | MP3 | PCM | 16-bit | 8 kHz or 16 kHz, mono or stereo | | OGG | OPUS | 16-bit | 8 kHz or 16 kHz, mono or stereo |
-For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is being created for each channel.
-To create an ordered final transcript, use the timestamps generated per utterance.
+For stereo audio streams, the left and right channels are split during the transcription. A JSON result file is created for each channel. To create an ordered final transcript, use the timestamps that are generated per utterance.
### Configuration
-Configuration parameters are provided as JSON.
+Configuration parameters are provided as JSON. You can transcribe one or more individual files, process a whole storage container, and use a custom trained model in a batch transcription.
-**Transcribing one or more individual files.** If you have more than one file to transcribe, we recommend sending multiple files in one request. The example below is using three files:
+If you have more than one file to transcribe, it's a good idea to send multiple files in one request. The following example uses three files:
```json {
Configuration parameters are provided as JSON.
} ```
-**Processing a whole storage container.** Container [SAS](../../storage/common/storage-sas-overview.md) should contain `r` (read) and `l` (list) permissions:
+To process a whole storage container, use the following configuration. The container [SAS](../../storage/common/storage-sas-overview.md) should contain `r` (read) and `l` (list) permissions:
```json {
Configuration parameters are provided as JSON.
} ```
-**Use a custom trained model in a batch transcription.** The example is using three files:
+Here's an example of using a custom trained model in a batch transcription. This example uses three files:
```json {
Configuration parameters are provided as JSON.
} ``` - ### Configuration properties Use these optional properties to configure transcription:
Use these optional properties to configure transcription:
`profanityFilterMode` :::column-end::: :::column span="2":::
- Optional, defaults to `Masked`. Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add "profanity" tags.
+ Optional, defaults to `Masked`. Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags.
:::row-end::: :::row::: :::column span="1":::
Use these optional properties to configure transcription:
`diarizationEnabled` :::column-end::: :::column span="2":::
- Optional, `false` by default. Specifies that diarization analysis should be carried out on the input, which is expected to be mono channel containing two voices. Note: Requires `wordLevelTimestampsEnabled` to be set to `true`.
+ Optional, `false` by default. Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. Requires `wordLevelTimestampsEnabled` to be set to `true`.
:::row-end::: :::row::: :::column span="1"::: `channels` :::column-end::: :::column span="2":::
- Optional, `0` and `1` transcribed by default. An array of channel numbers to process. Here a subset of the available channels in the audio file can be specified to be processed (for example `0` only).
+ Optional, `0` and `1` transcribed by default. An array of channel numbers to process. You can specify a subset of the available channels in the audio file to be processed (for example, `0` only).
:::row-end::: :::row::: :::column span="1"::: `timeToLive` :::column-end::: :::column span="2":::
- Optional, no deletion by default. A duration to automatically delete transcriptions after completing the transcription. The `timeToLive` is useful in mass processing transcriptions to ensure they will be eventually deleted (for example, `PT12H` for 12 hours).
+ Optional, no deletion by default. A duration after which the completed transcription is automatically deleted. The `timeToLive` property is useful when you process transcriptions in bulk, to ensure that they're eventually deleted (for example, `PT12H` for 12 hours).
:::row-end::: :::row::: :::column span="1"::: `destinationContainerUrl` :::column-end::: :::column span="2":::
- Optional URL with [ad hoc SAS](../../storage/common/storage-sas-overview.md) to a writeable container in Azure. The result is stored in this container. SAS with stored access policy are **not** supported. When not specified, Microsoft stores the results in a storage container managed by Microsoft. When the transcription is deleted by calling [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription), the result data will also be deleted.
+ Optional URL with [ad hoc SAS](../../storage/common/storage-sas-overview.md) to a writeable container in Azure. The result is stored in this container. SAS with stored access policies isn't supported. If you don't specify a container, Microsoft stores the results in a storage container managed by Microsoft. When the transcription is deleted by calling [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription), the result data is also deleted.
:::row-end::: ### Storage Batch transcription can read audio from a public-visible internet URI,
-and can read audio or write transcriptions using a SAS URI with [Azure Blob storage](../../storage/blobs/storage-blobs-overview.md).
+and can read audio or write transcriptions by using a SAS URI with [Blob Storage](../../storage/blobs/storage-blobs-overview.md).
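To connect the storage options to the configuration properties described earlier, here's a hedged C# sketch that builds a request body combining a SAS content URL with some of the optional properties. All URI and property values here are illustrative placeholders, not recommendations.

```csharp
using System.Text.Json;

static class TranscriptionRequestSketch
{
    // Builds a request body for POST speechtotext/v3.0/transcriptions.
    // Property names follow the Configuration properties section above; values are placeholders.
    public static string BuildBody() => JsonSerializer.Serialize(new
    {
        contentUrls = new[] { "<SAS URI with read permission to an audio file>" },
        locale = "en-US",
        displayName = "Transcription with optional properties",
        properties = new
        {
            wordLevelTimestampsEnabled = true,
            profanityFilterMode = "Masked",          // mask profanity with asterisks
            timeToLive = "PT12H",                    // delete the transcription 12 hours after it completes
            destinationContainerUrl = "<ad hoc SAS URI to a writeable container>"
        }
    });
}
```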
## Batch transcription result
-For each audio input, one transcription result file is created.
-The [Get transcriptions files](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation
-returns a list of result files for this transcription.
-To find the transcription file for a specific input file,
-filter all returned files with `kind` == `Transcription` and `name` == `{originalInputName.suffix}.json`.
+For each audio input, one transcription result file is created. The [Get transcriptions files](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation returns a list of result files for this transcription.
+To find the transcription file for a specific input file, filter all returned files with `kind` set to `Transcription`, and `name` set to `{originalInputName.suffix}.json`.
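As a sketch of that filtering step, the following C# snippet assumes the v3.0 files response returns a `values` array with `kind` and `name` fields; confirm the exact response shape in the Swagger document.

```csharp
using System;
using System.Linq;
using System.Text.Json;

static class TranscriptionFileFilter
{
    // filesJson is the body returned by GET speechtotext/v3.0/transcriptions/{id}/files.
    // originalInputNameWithSuffix is the input file name including its extension, for example "audio1.wav".
    public static void PrintMatchingResultFiles(string filesJson, string originalInputNameWithSuffix)
    {
        using var document = JsonDocument.Parse(filesJson);

        var matches = document.RootElement
            .GetProperty("values")                   // assumed field name; verify against the Swagger document
            .EnumerateArray()
            .Where(file => file.GetProperty("kind").GetString() == "Transcription"
                        && file.GetProperty("name").GetString() == $"{originalInputNameWithSuffix}.json");

        foreach (var file in matches)
        {
            Console.WriteLine(file.GetProperty("name").GetString());
        }
    }
}
```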
Each transcription result file has this format:
The result contains the following fields:
`itn` :::column-end::: :::column span="2":::
- Inverse-text-normalized form of the recognized text. Abbreviations ("doctor smith" to "dr smith"), phone numbers, and other transformations are applied.
+ The inverse-text-normalized (ITN) form of the recognized text. Abbreviations (for example, "doctor smith" to "dr smith"), phone numbers, and other transformations are applied.
:::row-end::: :::row::: :::column span="1":::
The result contains the following fields:
## Speaker separation (diarization)
-Diarization is the process of separating speakers in a piece of audio. The batch pipeline supports diarization and is capable of recognizing two speakers on mono channel recordings. The feature is not available on stereo recordings.
+*Diarization* is the process of separating speakers in a piece of audio. The batch pipeline supports diarization and is capable of recognizing two speakers on mono channel recordings. The feature isn't available on stereo recordings.
-The output of transcription with diarization enabled contains a `Speaker` entry for each transcribed phrase. If diarization is not used, the `Speaker` property is not present in the JSON output. For diarization we support two voices, so the speakers are identified as `1` or `2`.
+The output of transcription with diarization enabled contains a `Speaker` entry for each transcribed phrase. If diarization isn't used, the `Speaker` property isn't present in the JSON output. For diarization, the speakers are identified as `1` or `2`.
-To request diarization, add set the `diarizationEnabled` property to `true` like the HTTP request shows below.
+To request diarization, set the `diarizationEnabled` property to `true`. Here's an example:
```json {
To request diarization, add set the `diarizationEnabled` property to `true` like
} ```
-Word-level timestamps must be enabled as the parameters in the above request indicate.
+Word-level timestamps must be enabled, as the parameters in this request indicate.
## Best practices
-The batch transcription service can handle large number of submitted transcriptions. You can query the status of your transcriptions
-with [Get transcriptions](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions).
-Call [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)
-regularly from the service once you retrieved the results. Alternatively set `timeToLive` property to ensure eventual
-deletion of the results.
+The batch transcription service can handle a large number of submitted transcriptions. You can query the status of your transcriptions with [Get transcriptions](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptions). Call [Delete transcription](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteTranscription)
+regularly from the service, after you retrieve the results. Alternatively, set the `timeToLive` property to ensure the eventual deletion of the results.
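A minimal sketch of that housekeeping pattern, assuming placeholder values for the region, subscription key, and transcription ID:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class TranscriptionCleanupSketch
{
    public static async Task CheckAndDeleteAsync(string region, string subscriptionKey, string transcriptionId)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

        var baseUrl = $"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions";

        // Check the status of a single transcription (Get transcription).
        var status = await client.GetStringAsync($"{baseUrl}/{transcriptionId}");
        Console.WriteLine(status);

        // After you've retrieved and stored the result files, delete the transcription (Delete transcription).
        var deleteResponse = await client.DeleteAsync($"{baseUrl}/{transcriptionId}");
        Console.WriteLine(deleteResponse.StatusCode);
    }
}
```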
> [!TIP]
-> You can use the [Ingestion Client](ingestion-client.md) tool and resulting solution to process high volume of audio.
+> You can use the [Ingestion Client](ingestion-client.md) tool and resulting solution to process a high volume of audio.
## Sample code
-Complete samples are available in the [GitHub sample repository](https://aka.ms/csspeech/samples) inside the `samples/batch` subdirectory.
+Complete samples are available in the [GitHub sample repository](https://aka.ms/csspeech/samples), inside the `samples/batch` subdirectory.
Update the sample code with your subscription information, service region, URI pointing to the audio file to transcribe, and model location if you're using a custom model. [!code-csharp[Configuration variables for batch transcription](~/samples-cognitive-services-speech-sdk/samples/batch/csharp/batchclient/program.cs#transcriptiondefinition)]
-The sample code sets up the client and submits the transcription request. It then polls for the status information and print details about the transcription progress.
+The sample code sets up the client and submits the transcription request. It then polls for the status information and prints details about the transcription progress.
```csharp // get the status of our transcriptions periodically and log results
while (completed < 1)
For full details about the preceding calls, see our [Swagger document](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0). For the full sample shown here, go to [GitHub](https://aka.ms/csspeech/samples) in the `samples/batch` subdirectory.
-This sample uses an asynchronous setup to post audio and receive transcription status.
-The `PostTranscriptions` method sends the audio file details and the `GetTranscriptions` method receives the states.
-`PostTranscriptions` returns a handle, and `GetTranscriptions` uses it to create a handle to get the transcription status.
+This sample uses an asynchronous setup to post audio and receive transcription status. The `PostTranscriptions` method sends the audio file details, and the `GetTranscriptions` method receives the states. `PostTranscriptions` returns a handle, and `GetTranscriptions` uses it to create a handle to get the transcription status.
This sample code doesn't specify a custom model. The service uses the baseline model for transcribing the file or files. To specify a custom model, pass the model reference for the custom model to the same method.
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
Title: "Custom Speech overview - Speech service"
+ Title: Custom Speech overview - Speech service
-description: Custom Speech is a set of online tools that allow you to evaluate and improve the Microsoft speech-to-text accuracy for your applications, tools, and products.
+description: Custom Speech is a set of online tools that allows you to evaluate and improve the Microsoft speech-to-text accuracy for your applications, tools, and products.
# What is Custom Speech?
-Custom Speech allows you to evaluate and improve the Microsoft speech-to-text accuracy for your applications and products. Follow the links in this article to start creating a custom speech-to-text experience.
+With Custom Speech, you can evaluate and improve the Microsoft speech-to-text accuracy for your applications and products. Follow the links in this article to start creating a custom speech-to-text experience.
## What's in Custom Speech? Before you can do anything with Custom Speech, you'll need an Azure account and a Speech service subscription. After you have an account, you can prep your data, train and test your models, inspect recognition quality, evaluate accuracy, and ultimately deploy and use the custom speech-to-text model.
-This diagram highlights the pieces that make up the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech). Use the links below to learn more about each step.
+This diagram highlights the pieces that make up the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech).
![Diagram that highlights the components that make up the Custom Speech area of the Speech Studio.](./media/custom-speech/custom-speech-overview.png)
+Here's more information about the sequence of steps that the diagram shows:
+ 1. [Subscribe and create a project](#set-up-your-azure-account). Create an Azure account and subscribe to the Speech service. This unified subscription gives you access to speech-to-text, text-to-speech, speech translation, and the [Speech Studio](https://speech.microsoft.com/customspeech). Then use your Speech service subscription to create your first Custom Speech project. 1. [Upload test data](./how-to-custom-speech-test-and-train.md). Upload test data (audio files) to evaluate the Microsoft speech-to-text offering for your applications, tools, and products. 1. [Inspect recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://speech.microsoft.com/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data. For quantitative measurements, see [Inspect data](how-to-custom-speech-inspect-data.md).
-1. [Evaluate and improve accuracy](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The [Speech Studio](https://speech.microsoft.com/customspeech) will provide a *Word Error Rate*, which you can use to determine if additional training is required. If you're satisfied with the accuracy, you can use the Speech service APIs directly. If you want to improve accuracy by a relative average of 5% to 20%, use the **Training** tab in the portal to upload additional training data, like human-labeled transcripts and related text.
+1. [Evaluate and improve accuracy](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The [Speech Studio](https://speech.microsoft.com/customspeech) provides a *word error rate*, which you can use to determine if additional training is required. If you're satisfied with the accuracy, you can use the Speech service APIs directly. If you want to improve accuracy by a relative average of 5 through 20 percent, use the **Training** tab in the portal to upload additional training data, like human-labeled transcripts and related text.
-1. [Train and deploy a model](how-to-custom-speech-train-model.md). Improve the accuracy of your speech-to-text model by providing written transcripts (10 to 1,000 hours) and related text (<200 MB) along with your audio test data. This data helps to train the speech-to-text model. After training, retest. If you're satisfied with the result, you can deploy your model to a custom endpoint.
+1. [Train and deploy a model](how-to-custom-speech-train-model.md). Improve the accuracy of your speech-to-text model by providing written transcripts (from 10 to 1,000 hours), and related text (<200 MB), along with your audio test data. This data helps to train the speech-to-text model. After training, retest. If you're satisfied with the result, you can deploy your model to a custom endpoint.
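After you deploy a model to a custom endpoint (step 5), your application can reference that endpoint through the Speech SDK. The following C# sketch is a minimal illustration, assuming placeholder values for the key, region, endpoint ID, and audio file path; it isn't the only way to consume a custom endpoint.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class CustomEndpointSketch
{
    static async Task Main()
    {
        // Placeholder values: replace with your own key, region, and Custom Speech endpoint ID.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        config.EndpointId = "YourCustomEndpointId";

        // Recognize speech from a WAV file by using the deployed custom model.
        using var audioInput = AudioConfig.FromWavFileInput("path-to-your-audio-file.wav");
        using var recognizer = new SpeechRecognizer(config, audioInput);

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"Recognized: {result.Text}");
    }
}
```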
## Set up your Azure account You need to have an Azure account and Speech service subscription before you can use the [Speech Studio](https://speech.microsoft.com/customspeech) to create a custom model. If you don't have an account and subscription, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
-If you plan to train a custom model with **audio data**, pick one of the following regions that have dedicated hardware available for training. This will reduce the time it takes to train a model and allow you to use more audio for training. In these regions, the Speech service will use up to 20 hours of audio for training; in other regions it will only use up to 8 hours.
+If you plan to train a custom model with audio data, pick one of the following regions that have dedicated hardware available for training. This reduces the time it takes to train a model and allows you to use more audio for training. In these regions, the Speech service will use up to 20 hours of audio for training; in other regions, it will only use up to 8 hours.
* Australia East * Canada Central
If you plan to train a custom model with **audio data**, pick one of the followi
* West Europe * West US 2
-After you create an Azure account and a Speech service subscription, you'll need to sign in to the [Speech Studio](https://speech.microsoft.com/customspeech) and connect your subscription.
+After you create an Azure account and a Speech service subscription, sign in to the [Speech Studio](https://speech.microsoft.com/customspeech), and connect your subscription.
1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
-1. Select the subscription you need to work in and create a speech project.
-1. If you want to modify your subscription, select the cog button in the top menu.
+1. Select the subscription in which you need to work, and create a speech project.
+1. If you want to modify your subscription, select the cog icon in the top menu.
-## How to create a project
+## Create a project
-Content like data, models, tests, and endpoints are organized into *projects* in the [Speech Studio](https://speech.microsoft.com/customspeech). Each project is specific to a domain and country/language. For example, you might create a project for call centers that use English in the United States.
+Content like data, models, tests, and endpoints is organized into *projects* in the [Speech Studio](https://speech.microsoft.com/customspeech). Each project is specific to a domain and country or language. For example, you might create a project for call centers that use English in the United States.
To create your first project, select **Speech-to-text/Custom speech**, and then select **New Project**. Follow the instructions provided by the wizard to create your project. After you create a project, you should see four tabs: **Data**, **Testing**, **Training**, and **Deployment**. Use the links provided in [Next steps](#next-steps) to learn how to use each tab.
-## Model and Endpoint lifecycle
+## Model and endpoint lifecycle
-Older models typically become less useful over time because the newest model usually has higher accuracy. Therefore, base models as well as custom models and endpoints created through the portal are subject to expiration after 1 year for adaptation and 2 years for decoding. See a detailed description in the [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md) article.
+Older models typically become less useful over time because the newest model usually has higher accuracy. Therefore, base models as well as custom models and endpoints created through the portal are subject to expiration after one year for adaptation, and two years for decoding. For more information, see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md).
## Next steps
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
# Improve synthesis with the Audio Content Creation tool
-[Audio Content Creation](https://aka.ms/audiocontentcreation) is an easy-to-use and powerful tool that lets you build highly natural audio content for a variety of scenarios, like audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can fine-tune Text-to-Speech voices and design-customized audio experiences in an efficient and low-cost way.
+[Audio Content Creation](https://aka.ms/audiocontentcreation) is an easy-to-use and powerful tool that lets you build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can fine-tune text-to-speech voices and design customized audio experiences in an efficient and low-cost way.
-The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust Text-to-Speech output attributes in real time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
+The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust text-to-speech output attributes in real time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
-You can have easy access to more than 270 neural voices across 119 different languages as of November, 2021, including the state-of-the-art prebuilt neural voices, and your custom neural voice if you have built one.
+As of November 2021, you have easy access to more than 270 neural voices across 119 different languages. These voices include state-of-the-art prebuilt neural voices and your custom neural voice, if you've built one.
-See the [video tutorial](https://youtu.be/ygApYuOOG6w) for Audio Content Creation.
+To learn more, view the [Audio Content Creation tutorial video](https://youtu.be/ygApYuOOG6w).
-## How to Get Started?
+## Get started
-Audio Content Creation is a free tool, but you will pay for the Azure Speech service you consume. To work with the tool, you need to log in with an Azure account and create a speech resource. For each Azure account, you have free monthly speech quotas which include 0.5 million characters for prebuilt neural voices (referred as *Neural* on [pricing page](https://aka.ms/speech-pricing)). The monthly allotted amount is usually enough for a small content team of around 3-5 people. Here are the steps for how to create an Azure account and get a speech resource.
+Audio Content Creation is a free tool, but you'll pay for the Speech service that you consume. To work with the tool, you need to sign in with an Azure account and create a Speech resource. For each Azure account, you have free monthly speech quotas, which include 0.5 million characters for prebuilt neural voices (referred to as *Neural* on the [pricing page](https://aka.ms/speech-pricing)). The monthly allotted amount is usually enough for a small content team of around 3-5 people.
-### Step 1 - Create an Azure account
+The next sections cover how to create an Azure account and get a Speech resource.
-To work with Audio Content Creation, you need to have a [Microsoft account](https://account.microsoft.com/account) and an [Azure account](https://azure.microsoft.com/free/ai/). Follow these instructions to [set up the account](./overview.md#try-the-speech-service-for-free).
+### Step 1: Create an Azure account
-[Azure portal](https://portal.azure.com/) is the centralized place for you to manage your Azure account. You can create the speech resource, manage the product access, and monitor everything from simple web apps to complex cloud deployments.
+To work with Audio Content Creation, you need a [Microsoft account](https://account.microsoft.com/account) and an [Azure account](https://azure.microsoft.com/free/ai/). To set up the accounts, see the "Try the Speech service for free" section in [What is the Speech service?](./overview.md#try-the-speech-service-for-free).
-### Step 2 - Create a Speech resource
+[The Azure portal](https://portal.azure.com/) is the centralized place for you to manage your Azure account. You can create the Speech resource, manage the product access, and monitor everything from simple web apps to complex cloud deployments.
-After signing up for the Azure account, you need to create a Speech resource under your Azure account to access Speech services. View the instructions for [how to create a Speech resource](./overview.md#create-the-azure-resource).
+### Step 2: Create a Speech resource
-It takes a few moments to deploy your new Speech resource. Once the deployment is complete, you can start the Audio Content Creation journey.
+After you sign up for the Azure account, you need to create a Speech resource in your Azure account to access Speech services. For instructions, see [how to create a Speech resource](./overview.md#create-the-azure-resource).
+
+It takes a few moments to deploy your new Speech resource. After the deployment is complete, you can start using the Audio Content Creation tool.
> [!NOTE] > If you plan to use neural voices, make sure that you create your resource in [a region that supports neural voices](regions.md#prebuilt-neural-voices).
-### Step 3 - Log into the Audio Content Creation with your Azure account and Speech resource
+### Step 3: Sign in to Audio Content Creation with your Azure account and Speech resource
+
+1. After you get the Azure account and the Speech resource, you can sign in to the [Audio Content Creation tool](https://aka.ms/audiocontentcreation) by selecting **Get started**.
+
+1. The home page lists all the products under Speech Studio. To start, select **Audio Content Creation**.
+
+ The **Welcome to Speech Studio** page opens.
+
+1. Select the Azure subscription and the Speech resource you want to work with, and then select **Use resource**.
+
+ The next time you sign in to Audio Content Creation, you're linked directly to the audio work files under the current Speech resource. You can check your Azure subscription details and status in the [Azure portal](https://portal.azure.com/).
+
+ If you don't have an available Speech resource and you're the owner or admin of an Azure subscription, you can create a Speech resource in Speech Studio by selecting **Create a new resource**.
+
+ If you have a user role for a certain Azure subscription, you might not have permissions to create a new Speech resource. To get access, contact your admin.
+
+ To modify your Speech resource at any time, select **Settings** at the top of the page.
+
+ To switch directories, select **Settings** or go to your profile.
+
+## Use the tool
-1. After getting the Azure account and the Speech resource, you can log into [Audio Content Creation](https://aka.ms/audiocontentcreation) by selecting **Get started**.
-2. The home page lists all the products under Speech Studio. Select **Audio Content Creation** to start.
-3. The **Welcome to Speech Studio** page will appear to you to set up the speech service. Select the Azure subscription and the Speech resource you want to work on. Select **Use resource** to complete the settings. When you log into the Audio Content Creation tool for the Next time, we will link you directly to the audio work files under the current speech resource. You can check your Azure subscriptions details and status in [Azure portal](https://portal.azure.com/). If you do not have available speech resource and you are the owner or admin of an Azure subscription, you can also create a new Speech resource in Speech Studio by selecting **Create a new resource**. If you are a user role for a certain Azure subscription, you may not have the permission to create a new speech resource. Please contact your admin to get the speech resource access.
-4. You can modify your Speech resource at any time with the **Settings** option, located in the top nav.
-5. If you want to switch directory, please go the **Settings** or your profile to operate.
+The following diagram displays the process for fine-tuning the text-to-speech outputs.
-## How to use the tool?
-This diagram shows the steps it takes to fine-tune Text-to-Speech outputs. Use the links below to learn more about each step.
+Each step in the preceding diagram is described here:
+1. Choose the Speech resource you want to work with.
+
+1. [Create an audio tuning file](#create-an-audio-tuning-file) by using plain text or SSML scripts. Type or upload your content into Audio Content Creation.
+1. Choose the voice and the language for your script content. Audio Content Creation includes all of the [Microsoft text-to-speech voices](language-support.md#text-to-speech). You can use prebuilt neural voices or a custom neural voice.
-1. Choose the speech resource you want to work on.
-2. [Create an audio tuning file](#create-an-audio-tuning-file) using plain text or SSML scripts. Type or upload your content in to Audio Content Creation.
-3. Choose the voice and the language for your script content. Audio Content Creation includes all of the [Microsoft Text-to-Speech voices](language-support.md#text-to-speech). You can use prebuilt neural voices or your custom neural voices.
> [!NOTE]
- > Gated access is available for Custom Neural Voice, which allow you to create high-definition voices similar to natural-sounding speech. For additional details, see [Gating process](./text-to-speech.md).
+ > Gated access is available for Custom Neural Voice, which allows you to create high-definition voices that are similar to natural-sounding speech. For more information, see [Gating process](./text-to-speech.md).
+
+1. Select the content you want to preview, and then select **Play** (triangle icon) to preview the default synthesis output.
+
+ If you make any changes to the text, select the **Stop** icon, and then select **Play** again to regenerate the audio with changed scripts.
+
+ Improve the output by adjusting pronunciation, break, pitch, rate, intonation, voice style, and more. For a complete list of options, see [Speech Synthesis Markup Language](speech-synthesis-markup.md).
-4. Select the content you want to preview and select the **play** icon (a triangle) to preview the default synthesis output. Please note that if you make any changes on the text, you need to select the **Stop** icon and then select **play** icon again to re-generate the audio with changed scripts.
-5. Improve the output by adjusting pronunciation, break, pitch, rate, intonation, voice style, and more. For a complete list of options, see [Speech Synthesis Markup Language](speech-synthesis-markup.md). Here is a [video](https://youtu.be/ygApYuOOG6w) to show how to fine-tune speech output with Audio Content Creation.
-6. Save and [export your tuned audio](#export-tuned-audio). When you save the tuning track in the system, you can continue to work and iterate on the output. When you're satisfied with the output, you can create an audio creation task with the export feature. You can observe the status of the export task and download the output for use with your apps and products.
+ For more information about fine-tuning speech output, view the [How to convert Text to Speech using Microsoft Azure AI voices](https://youtu.be/ygApYuOOG6w) video.
+
+1. Save and [export your tuned audio](#export-tuned-audio).
+
+ When you save the tuning track in the system, you can continue to work and iterate on the output. When you're satisfied with the output, you can create an audio creation task with the export feature. You can observe the status of the export task and download the output for use with your apps and products.
## Create an audio tuning file
-There are two ways to get your content into the Audio Content Creation tool.
+You can get your content into the Audio Content Creation tool in either of two ways:
+
+* **Option 1**
-**Option 1:**
+ 1. Select **New** > **File** to create a new audio tuning file.
-1. Select **New** > **file** to create a new audio tuning file.
-2. Type or paste your content into the editing window. The characters for each file is up to 20,000. If your script is longer than 20,000 characters, you can use Option 2 to automatically split your content into multiple files.
-3. Don't forget to save.
+ 1. Type or paste your content into the editing window. The allowable number of characters for each file is 20,000 or fewer. If your script contains more than 20,000 characters, you can use Option 2 to automatically split your content into multiple files.
+ 1. Select **Save**.
-**Option 2:**
+* **Option 2**
-1. Select **Upload** to import one or more text files. Both plain text and SSML are supported. If your script file is more than 20,000 characters, please split the file by paragraphs, by character or by regular expressions.
-3. When you upload your text files, make sure that the file meets these requirements.
+ 1. Select **Upload** to import one or more text files. Both plain text and SSML are supported.
- | Property | Value / Notes |
- |-||
- | File format | Plain text (.txt)<br/> SSML text (.txt)<br/> Zip files aren't supported |
- | Encoding format | UTF-8 |
- | File name | Each file must have a unique name. Duplicates aren't supported. |
- | Text length | Character limitation of the text file is 20,000. If your files exceed the limitation, please split the files with the instructions in the tool. |
- | SSML restrictions | Each SSML file can only contain a single piece of SSML. |
+ If your script file is more than 20,000 characters, split the content by paragraphs, by characters, or by regular expressions.
-**Plain text example**
+ 1. When you upload your text files, make sure that they meet these requirements:
-```txt
-Welcome to use Audio Content Creation to customize audio output for your products.
-```
+ | Property | Description |
+ |-||
+ | File format | Plain text (.txt)\*<br> SSML text (.txt)\**<br/> Zip files aren't supported. |
+ | Encoding format | UTF-8 |
+ | File name | Each file must have a unique name. Duplicate files aren't supported. |
+ | Text length | Character limit is 20,000. If your files exceed the limit, split them according to the instructions in the tool. |
+ | SSML restrictions | Each SSML file can contain only a single piece of SSML. |
+ | | |
-**SSML text example**
+ \* **Plain text example**:
-```xml
-<speak xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" version="1.0" xml:lang="en-US">
- <voice name="Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)">
- Welcome to use Audio Content Creation <break time="10ms" />to customize audio output for your products.
- </voice>
-</speak>
-```
+ ```txt
+ Welcome to use Audio Content Creation to customize audio output for your products.
+ ```
+
+ \** **SSML text example**:
+
+ ```xml
+ <speak xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" version="1.0" xml:lang="en-US">
+ <voice name="Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)">
+ Welcome to use Audio Content Creation <break time="10ms" />to customize audio output for your products.
+ </voice>
+ </speak>
+ ```
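If you want to hear the same SSML outside the Audio Content Creation tool, the Speech SDK can synthesize it directly. The following C# sketch is a minimal example that reuses the SSML shown above; the subscription key and region are placeholders, and the sketch isn't part of the tool itself.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class SsmlSynthesisSketch
{
    static async Task Main()
    {
        // Placeholder values: replace with your own Speech resource key and region.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // The same SSML shown in the preceding example, including the short break.
        string ssml =
            "<speak xmlns=\"http://www.w3.org/2001/10/synthesis\" " +
            "xmlns:mstts=\"http://www.w3.org/2001/mstts\" version=\"1.0\" xml:lang=\"en-US\">" +
            "<voice name=\"Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)\">" +
            "Welcome to use Audio Content Creation <break time=\"10ms\" />to customize audio output for your products." +
            "</voice></speak>";

        using var synthesizer = new SpeechSynthesizer(config);
        var result = await synthesizer.SpeakSsmlAsync(ssml);

        if (result.Reason == ResultReason.SynthesizingAudioCompleted)
        {
            Console.WriteLine("Synthesis completed.");
        }
    }
}
```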
## Export tuned audio After you've reviewed your audio output and are satisfied with your tuning and adjustment, you can export the audio.
-1. Select **Export** to create an audio creation task. **Export to Audio Library** is recommended as it supports the long audio output and the full audio output experience. You can also download the audio to your local disk directly, but only the first 10 minutes are available.
-2. Choose the output format for your tuned audio. A list of supported formats and sample rates is available below.
-3. You can view the status of the task on the **Export task** tab. If the task fails, see the detailed information page for a full report.
-4. When the task is complete, your audio is available for download on the **Audio Library** tab.
-5. Select **Download**. Now you're ready to use your custom tuned audio in your apps or products.
+1. Select **Export** to create an audio creation task.
+
+ We recommend **Export to Audio Library**, because this option supports the long audio output and the full audio output experience. You can also download the audio to your local disk directly, but only the first 10 minutes are available.
+
+1. Choose the output format for your tuned audio. The **supported audio formats and sample rates** are listed in the following table (a Speech SDK mapping is sketched after these steps):
+
+ | Format | 8 kHz sample rate | 16 kHz sample rate | 24 kHz sample rate | 48 kHz sample rate |
+ | | | | | |
+ | wav | riff-8khz-16bit-mono-pcm | riff-16khz-16bit-mono-pcm | riff-24khz-16bit-mono-pcm |riff-48khz-16bit-mono-pcm |
+ | mp3 | N/A | audio-16khz-128kbitrate-mono-mp3 | audio-24khz-160kbitrate-mono-mp3 |audio-48khz-192kbitrate-mono-mp3 |
+ | | |
+
+1. To view the status of the task, select the **Export task** tab.
+
+ If the task fails, see the detailed information page for a full report.
+
+1. When the task is complete, your audio is available for download on the **Audio Library** pane.
+
+1. Select **Download**. Now you're ready to use your custom tuned audio in your apps or products.
+
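If you later synthesize audio with the Speech SDK instead of exporting from the portal, the format names in the preceding table correspond to `SpeechSynthesisOutputFormat` values. A minimal sketch, assuming a placeholder key and region:

```csharp
using Microsoft.CognitiveServices.Speech;

// Placeholder values: replace with your own Speech resource key and region.
var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

// For example, riff-24khz-16bit-mono-pcm from the table maps to this enum value.
config.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm);

// Similarly, audio-48khz-192kbitrate-mono-mp3 maps to Audio48Khz192KBitRateMonoMp3, and so on.
```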
+## Add or remove Audio Content Creation users
+
+If more than one user wants to use Audio Content Creation, you can grant them access to the Azure subscription and the Speech resource. If you add users to an Azure subscription, they can access all the resources under the Azure subscription. But if you add users to a Speech resource only, they'll have access only to the Speech resource and not to other resources under this Azure subscription. Users with access to the Speech resource can use the Audio Content Creation tool.
+
+The users you grant access to need to set up a [Microsoft account](https://account.microsoft.com/account). If they don't have a Microsoft account, they can create one in just a few minutes. They can use their existing email and link it to a Microsoft account, or they can create and use an Outlook email address as a Microsoft account.
+
+### Add users to a Speech resource
+
+To add users to a Speech resource so that they can use Audio Content Creation, do the following:
-**Supported audio formats**
+1. In the [Azure portal](https://portal.azure.com/), search for and select **Cognitive Services**, and then select the Speech resource that you want to add users to.
+1. Select **Access control (IAM)**, select **Add**, and then select **Add role assignment (preview)** to open the **Add role assignment** pane.
+1. Select the **Role** tab, and then select the **Cognitive Service User** role. If you want to give a user ownership of this Speech resource, select the **Owner** role.
+1. Select the **Members** tab, enter a user's email address and select the user's name in the directory. The email address must be linked to a Microsoft account that's trusted by Azure Active Directory. Users can easily sign up for a [Microsoft account](https://account.microsoft.com/account) by using their personal email address.
+1. Select the **Review + assign** tab, and then select **Review + assign** to assign the role to a user.
-| Format | 8 kHz sample rate | 16 kHz sample rate | 24 kHz sample rate | 48 kHz sample rate |
-|--|--|--|--|--|
-| wav | riff-8khz-16bit-mono-pcm | riff-16khz-16bit-mono-pcm | riff-24khz-16bit-mono-pcm |riff-48khz-16bit-mono-pcm |
-| mp3 | N/A | audio-16khz-128kbitrate-mono-mp3 | audio-24khz-160kbitrate-mono-mp3 |audio-48khz-192kbitrate-mono-mp3 |
+Here is what happens next:
-## How to add/remove Audio Content Creation users?
+An email invitation is automatically sent to users. They can accept it by selecting **Accept invitation** > **Accept to join Azure** in their email. They're then redirected to the Azure portal. They don't need to take further action in the Azure portal. After a few moments, users are assigned the role at the Speech resource scope, which gives them access to this Speech resource. If users don't receive the invitation email, you can search for their account under **Role assignments** and go into their profile. Look for **Identity** > **Invitation accepted**, and select **(manage)** to resend the email invitation. You can also copy and send the invitation link to them.
-If more than one user wants to use Audio Content Creation, you can grant user access to the Azure subscription and the speech resource. If you add a user to an Azure subscription, the user can access all the resources under the Azure subscription. But if you only add a user to a speech resource, the user will only have access to the speech resource, and cannot access other resources under this Azure subscription. A user with access to the speech resource can use Audio Content Creation.
+Users can now visit or refresh the [Audio Content Creation](https://aka.ms/audiocontentcreation) product page and sign in with their Microsoft account. They select the **Audio Content Creation** block among all the Speech products, and then choose the Speech resource in the pop-up window or in the settings at the upper right.
-The user need to prepare a [Microsoft account](https://account.microsoft.com/account). If the user do not have a Microsoft account, create one with just a few minutes. The user can use the existing email and link as a Microsoft account, or creat a new outlook email as Microsoft account.
+If they can't find the available Speech resource, they can check to ensure that they're in the right directory. To do so, they select the account profile at the upper right and then select **Switch** next to **Current directory**. If there's more than one directory available, it means they have access to multiple directories. They can switch to different directories and go to **Settings** to see whether the right Speech resource is available.
-### Add users to a speech resource
+Users who are in the same Speech resource will see each other's work in the Audio Content Creation studio. If you want each user to have a unique and private workspace in Audio Content Creation, [create a new Speech resource](#step-2-create-a-speech-resource) for each user and give each user unique access to the Speech resource.
-Follow these steps to add a user to a speech resource so they can use Audio Content Creation.
+### Remove users from a Speech resource
-1. Search for **Cognitive services** in the [Azure portal](https://portal.azure.com/), select the speech resource that you want to add users to.
-2. Select **Access control (IAM)**. Select **Add** > **Add role assignment (Preview)** to open the Add role assignment pane.
-1. On the **Role** tab, select the **Cognitive Service User** role. If you want to give the user ownership of this speech resource, you can select the **Owner** role.
-1. On the **Members** tab, type in user's email address and select the user in the directory. The email address must be a **Microsoft account**, which is trusted by Azure active directory. Users can easily sign up a [Microsoft account](https://account.microsoft.com/account) using a personal email address.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-1. The user will receive an email invitation. Accept the invitation by selecting **Accept invitation** > **Accept to join Azure** in the email. Then the user will be redirected to the Azure portal. The user does not need to take further action in the Azure portal. After a few moments, the user is assigned the role at the speech resource scope, and will have the access to this speech resource. If the user didn't receive the invitation email, you can search the user's account under "Role assignments" and go inside the user's profile. Find "Identity" -> "Invitation accepted", and select **(manage)** to resend the email invitation. You can also copy the invitation link to the users.
-1. The user now visits or refreshes the [Audio Content Creation](https://aka.ms/audiocontentcreation) product page, and sign in with the user's Microsoft account. Select **Audio Content Creation** block among all speech products. Choose the speech resource in the pop-up window or in the settings at the upper right of the page. If the user cannot find available speech resource, check if you are in the right directory. To check the right directory, select the account profile in the upper right corner, and select **Switch** besides the "Current directory". If there are more than one directory available, it means you have access to multiple directories. Switch to different directories and go to settings to see if the right speech resource is available.
+1. In the Azure portal, search for **Cognitive Services**, and then select the Speech resource that you want to remove users from.
+1. Select **Access control (IAM)**, and then select the **Role assignments** tab to view all the role assignments for this Speech resource.
+1. Select the users you want to remove, select **Remove**, and then select **OK**.
- :::image type="content" source="media/audio-content-creation/add-role-first.png" alt-text="Add role dialog":::
+ :::image type="content" source="media/audio-content-creation/remove-user.png" alt-text="Screenshot of the 'Remove' button on the 'Remove role assignments' pane.":::
+### Enable users to grant access to others
-Users who are in the same speech resource will see each other's work in Audio Content Creation studio. If you want each individual user to have a unique and private workplace in Audio Content Creation, please [create a new speech resource](#step-2create-a-speech-resource) for each user and give each user the unique access to the speech resource.
+If you want to allow a user to grant access to other users, assign them the Owner role for the Speech resource, and add them as a directory reader in Azure Active Directory.
+1. Add the user as the owner of the Speech resource. For more information, see [Add users to a Speech resource](#add-users-to-a-speech-resource).
-### Remove users from a speech resource
+ :::image type="content" source="media/audio-content-creation/add-role.png" alt-text="Screenshot showing the 'Owner' role on the 'Add role assignment' pane. ":::
-1. Search for **Cognitive services** in the Azure portal, select the speech resource that you want to remove users from.
-2. Select **Access control (IAM)** > **Role assignments** tab to view all the role assignments for this speech resource.
-3. Select the users you want to remove, select **Remove** > **Ok**.
- :::image type="content" source="media/audio-content-creation/remove-user.png" alt-text="Remove button":::
+1. In the [Azure portal](https://portal.azure.com/), select the collapsed menu at the upper left, select **Azure Active Directory**, and then select **Users**.
+1. Search for the user's Microsoft account, go to their detail page, and then select **Assigned roles**.
+1. Select **Add assignments** > **Directory Readers**. If the **Add assignments** button is unavailable, it means that you don't have access. Only the global administrator of this directory can add assignments to users.
-### Enable users to grant access
+## See also
-If you want one of the users to give access to other users, you need to give the user the owner role for the speech resource and set the user as the Azure directory reader.
-1. Add the user as the owner of the speech resource. See [how to add users to a speech resource](#add-users-to-a-speech-resource).
- :::image type="content" source="media/audio-content-creation/add-role.png" alt-text="Role Owner field":::
-2. In the [Azure portal](https://portal.azure.com/), select the collapsed menu in the upper left. Select **Azure Active Directory**, and then Select **Users**.
-3. Search the user's Microsoft account, and go to the user's detail page. Select **Assigned roles**.
-4. Select **Add assignments** > **Directory Readers**. If the button "Add assignments" is grayed out, it means that you do not have the access. Only the global administrator of this directory can add assignment to users.
+* [Long Audio API](./long-audio-api.md)
## Next steps
cognitive-services How To Automatic Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-automatic-language-detection.md
Title: How to use language identification
-description: Language identification can be used with speech recognition to determine the language being spoken in speech audio being recognized.
+description: In this article, you learn how to use language identification with speech recognition to determine what language is being spoken in speech audio.
ms.devlang: cpp, csharp, java, javascript, objective-c, python
-# How to use language identification
+# Use language identification
-Language identification is used to determine the language being spoken in audio passed to the Speech SDK when compared against a list of provided languages. The value returned by language identification is then used to select the language model for speech to text, providing you with a more accurate transcription.
+You can use language identification to determine which language, from a list of languages, is being spoken in audio that's passed to the Speech SDK. The value that's returned by language identification is then used to select the language model for speech-to-text, which gives you a more accurate transcription.
-Language identification can also be used while doing [speech translation](./get-started-speech-translation.md?pivots=programming-language-csharp&tabs=script%2cwindowsinstall#multi-lingual-translation-with-language-identification), or by doing [standalone identification](./language-identification.md).
+You can also use language identification for [speech translation](./get-started-speech-translation.md?pivots=programming-language-csharp&tabs=script%2cwindowsinstall#multi-lingual-translation-with-language-identification) or for [standalone identification](./language-identification.md).
-To see which languages are available, see [Language support](language-support.md).
+For a list of available languages, see [Language support](language-support.md).
## Prerequisites
-This article assumes you have an Azure subscription and speech resource, and also assumes knowledge of speech recognition basics. [Complete the quickstart](get-started-speech-to-text.md) if you haven't already.
+* An Azure subscription and speech resource.
+* Knowledge of speech recognition basics. If you haven't already done so, [complete the speech-to-text quickstart](get-started-speech-to-text.md).
## Language identification with speech-to-text
-Language identification currently has a limit of **four languages** for at-start recognition, and **10 languages** for continuous recognition. Keep this limitation in mind when constructing your `AutoDetectSourceLanguageConfig` object. In the samples below, you use `AutoDetectSourceLanguageConfig` to define a list of possible languages that you want to identify, and then reference those languages when running speech recognition.
+Language identification currently has a limit of *four languages* for at-start recognition, and *10 languages* for continuous recognition. Keep this limitation in mind as you construct your `AutoDetectSourceLanguageConfig` object.
+
+In the following samples, you use `AutoDetectSourceLanguageConfig` to define a list of possible languages that you want to identify, and then reference those languages when you run speech recognition.
> [!IMPORTANT]
-> Continuous language identification is only supported in C#, C++, and Python.
+> Continuous language identification is supported only in C#, C++, and Python.
::: zone pivot="programming-language-csharp"
-The following example runs at-start recognition, prioritizing `Latency`. This property can also be set to `Accuracy` depending on the priority for your use-case. `Latency` is the best option to use if you need a low-latency result (e.g. for live streaming scenarios), but don't know the language in the audio sample.
+The following example runs at-start recognition, prioritizing `Latency`. You can also set this property to `Accuracy`, depending on the priority for your use case. `Latency` is your best option if you need a low-latency result (for example, for live streaming) but don't know which language is used in the audio sample.
-`Accuracy` should be used in scenarios where the audio quality may be poor, and more latency is acceptable. For example, a voicemail could have background noise, or some silence at the beginning, and allowing the engine more time will improve recognition results.
+You should use `Accuracy` in scenarios where the audio quality is poor and more latency is acceptable. For example, a voicemail message might have background noise or some silence at the beginning, and allowing the engine more time will improve recognition results.
-In either case, at-start recognition as shown below should **not be used** for scenarios where the language may be changing within the same audio sample. See below for continuous recognition for these types of scenarios.
+In either case, at-start recognition, as shown in the following code, should *not* be used for scenarios where one language changes to another within a single audio sample. (For *continuous recognition* for these types of scenarios, see the second C# code example.)
```csharp using Microsoft.CognitiveServices.Speech;
using (var recognizer = new SpeechRecognizer(
} ```
-The following example shows continuous speech recognition set up for a multilingual scenario. This example only uses `en-US` and `ja-JP` in the language config, but you can use up to **ten languages** for this design pattern. Each time speech is detected, the source language is identified and the audio is also converted to text output. This example uses `Latency` mode, which prioritizes response time.
+The following example shows continuous speech recognition set up for a multilingual scenario. This example uses only `en-US` and `ja-JP` in the language configuration, but you can use up to ten languages for this design pattern. Each time speech is detected, the source language is identified and the audio is converted to text output. This example uses `Latency` mode, which prioritizes response time.
```csharp using Microsoft.CognitiveServices.Speech;
using (var audioInput = AudioConfig.FromWavFileInput(@"path-to-your-audio-file.w
``` > [!NOTE]
-> `Latency` and `Accuracy` modes, and multilingual continuous recognition, are currently only supported in C#, C++, and Python.
+> `Latency` and `Accuracy` modes, and multilingual continuous recognition, are currently supported only in C#, C++, and Python.
::: zone-end ::: zone pivot="programming-language-cpp"
-The following example runs at-start recognition, prioritizing `Latency`. This property can also be set to `Accuracy` depending on the priority for your use-case. `Latency` is the best option to use if you need a low-latency result (e.g. for a live streaming case), but don't know the language in the audio sample.
+The following example runs at-start recognition, prioritizing `Latency`. You can also set this property to `Accuracy`, depending on the priority for your use case. `Latency` is your best option if you need a low-latency result (for example, for live streaming) but don't know which language is used in the audio sample.
-`Accuracy` should be used in scenarios where the audio quality may be poor, and more latency is acceptable. For example, a voicemail could have background noise, or some silence at the beginning, and allowing the engine more time will improve recognition results.
+You should use `Accuracy` in scenarios where the audio quality is poor and more latency is acceptable. For example, a voicemail message might have background noise or some silence at the beginning, and allowing the engine more time will improve recognition results.
-In either case, at-start recognition as shown below should **not be used** for scenarios where the language may be changing within the same audio sample. See below for continuous recognition for these types of scenarios.
+In either case, at-start recognition, as shown in the following code, should *not* be used for scenarios where one language changes to another within a single audio sample. (For *continuous recognition* for these types of scenarios, see the second C++ code example.)
```cpp using namespace std;
auto autoDetectSourceLanguageResult =
auto detectedLanguage = autoDetectSourceLanguageResult->Language; ```
-The following example shows continuous speech recognition set up for a multilingual scenario. This example only uses `en-US` and `ja-JP` in the language config, but you can use up to **ten languages** for this design pattern. Each time speech is detected, the source language is identified and the audio is also converted to text output. This example uses `Latency` mode, which prioritizes response time.
+The following example shows continuous speech recognition set up for a multilingual scenario. This example uses only `en-US` and `ja-JP` in the language configuration, but you can use up to ten languages for this design pattern. Each time speech is detected, the source language is identified and the audio is converted to text output. This example uses `Latency` mode, which prioritizes response time.
```cpp using namespace std;
recognizer->StopContinuousRecognitionAsync().get();
``` > [!NOTE]
-> `Latency` and `Accuracy` modes, and multilingual continuous recognition, are currently only supported in C#, C++, and Python.
+> `Latency` and `Accuracy` modes, and multilingual continuous recognition, are currently supported only in C#, C++, and Python.
::: zone-end
speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult)
::: zone-end
-## Use language detection with a custom Speech-to-Text model
+## Use language detection with a custom speech-to-text model
-In addition to language identification using Speech service base models, you can also specify a custom model for enhanced recognition. If a custom model isn't provided, the service will use the default language model.
+In addition to using language identification with Speech service base models, you can specify a custom model for enhanced recognition. If a custom model isn't provided, the service uses the default language model.
-The snippets below illustrate how to specify a custom model in your call to the Speech service. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the endpoint for the custom model is used:
+The following code snippets illustrate how to specify a custom model in your call to the Speech service. If the detected language is `en-US`, the default model is used. If the detected language is `fr-FR`, the endpoint for the custom model is used, as shown here:
::: zone pivot="programming-language-csharp"
var autoDetectConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromSourceLangua
## Next steps ::: zone pivot="programming-language-csharp"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L741) on GitHub for language identification
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#L741) on GitHub for language identification.
::: zone-end ::: zone pivot="programming-language-cpp"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L507) on GitHub for language identification
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L507) on GitHub for language identification.
::: zone-end ::: zone pivot="programming-language-java"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java#L521) on GitHub for language identification
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java#L521) on GitHub for language identification.
::: zone-end ::: zone pivot="programming-language-python"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L458) on GitHub for language identification
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L458) on GitHub for language identification.
::: zone-end ::: zone pivot="programming-language-objectivec"
-* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L525) on GitHub for language identification
+* See the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L525) on GitHub for language identification.
::: zone-end
cognitive-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-evaluate-data.md
Title: "Evaluate and improve Custom Speech accuracy - Speech service"
-description: "In this document you learn how to quantitatively measure and improve the quality of our speech-to-text model or your custom model. Audio + human-labeled transcription data is required to test accuracy, and 30 minutes to 5 hours of representative audio should be provided."
+description: "In this article, you learn how to quantitatively measure and improve the quality of our speech-to-text model or your custom model."
# Evaluate and improve Custom Speech accuracy
-In this article, you learn how to quantitatively measure and improve the accuracy of Microsoft speech-to-text models or your own custom models. Audio + human-labeled transcription data is required to test accuracy, and 30 minutes to 5 hours of representative audio should be provided.
+In this article, you learn how to quantitatively measure and improve the accuracy of the Microsoft speech-to-text model or your own custom models. Audio + human-labeled transcription data is required to test accuracy, and 30 minutes to 5 hours of representative audio should be provided.
## Evaluate Custom Speech accuracy
-The industry standard to measure model accuracy is [Word Error Rate](https://en.wikipedia.org/wiki/Word_error_rate) (WER). WER counts the number of incorrect words identified during recognition,
-then divides by the total number of words provided in the human-labeled transcript (shown below as N). Finally, that number is multiplied by 100% to calculate the WER.
+The industry standard for measuring model accuracy is [word error rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate). WER counts the number of incorrect words identified during recognition, divides that count by the total number of words provided in the human-labeled transcript (shown in the following formula as N), and then multiplies that quotient by 100 to calculate the error rate as a percentage.
-![WER formula](./media/custom-speech/custom-speech-wer-formula.png)
+![Screenshot showing the WER formula.](./media/custom-speech/custom-speech-wer-formula.png)
Incorrectly identified words fall into three categories:
Incorrectly identified words fall into three categories:
Here's an example:
-![Example of incorrectly identified words](./media/custom-speech/custom-speech-dis-words.png)
+![Screenshot showing an example of incorrectly identified words.](./media/custom-speech/custom-speech-dis-words.png)
-If you want to replicate WER measurements locally, you can use `sclite` from the [NIST Scoring Toolkit (SCTK)](https://github.com/usnistgov/SCTK).
+If you want to replicate WER measurements locally, you can use the sclite tool from the [NIST Scoring Toolkit (SCTK)](https://github.com/usnistgov/SCTK).
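For a quick sanity check of the arithmetic, you can also compute WER yourself after counting the error types. The following C# sketch uses hypothetical counts and doesn't perform the word alignment that sclite or the portal does:

```csharp
using System;

class WerExample
{
    static void Main()
    {
        // Hypothetical counts from comparing a recognition result against a human-labeled transcript.
        int insertions = 2;
        int deletions = 1;
        int substitutions = 3;
        int referenceWordCount = 100;   // N: total words in the human-labeled transcript

        // WER = (I + D + S) / N * 100
        double wer = (insertions + deletions + substitutions) / (double)referenceWordCount * 100;
        Console.WriteLine($"WER: {wer:F1}%");   // Prints "WER: 6.0%"
    }
}
```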
## Resolve errors and improve WER
-You can use the WER from the machine recognition results to evaluate the quality of the model you are using with your app, tool, or product. A WER of 5%-10% is considered to be good quality and is ready to use. A WER of 20% is acceptable, however you may want to consider additional training. A WER of 30% or more signals poor quality and requires customization and training.
+You can use the WER calculation from the machine recognition results to evaluate the quality of the model you're using with your app, tool, or product. A WER of 5-10% is considered to be good quality and is ready to use. A WER of 20% is acceptable, but you might want to consider additional training. A WER of 30% or more signals poor quality and requires customization and training.
-How the errors are distributed is important. When many deletion errors are encountered, it's usually because of weak audio signal strength. To resolve this issue, you'll need to collect audio data closer to the source. Insertion errors mean that the audio was recorded in a noisy environment and crosstalk may be present, causing recognition issues. Substitution errors are often encountered when an insufficient sample of domain-specific terms has been provided as either human-labeled transcriptions or related text.
+How the errors are distributed is important. When many deletion errors are encountered, it's usually because of weak audio signal strength. To resolve this issue, you need to collect audio data closer to the source. Insertion errors mean that the audio was recorded in a noisy environment and crosstalk might be present, causing recognition issues. Substitution errors are often encountered when an insufficient sample of domain-specific terms has been provided as either human-labeled transcriptions or related text.
By analyzing individual files, you can determine what type of errors exist, and which errors are unique to a specific file. Understanding issues at the file level will help you target improvements. ## Create a test
-If you'd like to test the quality of Microsoft's speech-to-text baseline model or a custom model that you've trained, you can compare two models side by side to evaluate accuracy. The comparison includes WER and recognition results. Typically, a custom model is compared with Microsoft's baseline model.
+If you want to test the quality of the Microsoft speech-to-text baseline model or a custom model that you've trained, you can compare two models side by side. The comparison includes WER and recognition results. A custom model is ordinarily compared with the Microsoft baseline model.
-To evaluate models side by side:
+To evaluate models side by side, do the following:
1. Sign in to the [Custom Speech portal](https://speech.microsoft.com/customspeech).
-2. Navigate to **Speech-to-text** > **Custom Speech** > [Project Name] > **Testing**.
-3. Select **Add Test**.
-4. Select **Evaluate accuracy**. Give the test a name, description, and select your audio + human-labeled transcription dataset.
-5. Select up to two models that you'd like to test.
-6. Select **Create**.
+
+1. Select **Speech-to-text** > **Custom Speech** > **\<name of project>** > **Testing**.
+1. Select **Add Test**.
+1. Select **Evaluate accuracy**. Give the test a name and description, and then select your audio + human-labeled transcription dataset.
+1. Select up to two models that you want to test.
+1. Select **Create**.
After your test has been successfully created, you can compare the results side by side. ### Side-by-side comparison
-Once the test is complete, indicated by the status change to *Succeeded*, you'll find a WER number for both models included in your test. Select on the test name to view the testing detail page. This detail page lists all the utterances in your dataset, indicating the recognition results of the two models alongside the transcription from the submitted dataset. To help inspect the side-by-side comparison, you can toggle various error types including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, which shows the human-labeled transcription and the results for two speech-to-text models, you can decide which model meets your needs and where additional training and improvements are required.
+After the test is complete, as indicated by the status change to *Succeeded*, you'll find a WER number for both models included in your test. Select the test name to view the test details page. This page lists all the utterances in your dataset and the recognition results of the two models, alongside the transcription from the submitted dataset.
+
+To inspect the side-by-side comparison, you can toggle various error types, including insertion, deletion, and substitution. By listening to the audio and comparing recognition results in each column, which display the human-labeled transcription and the results for two speech-to-text models, you can decide which model meets your needs and determine where additional training and improvements are required.
## Improve Custom Speech accuracy Speech recognition scenarios vary by audio quality and language (vocabulary and speaking style). The following table examines four common scenarios:
-| Scenario | Audio Quality | Vocabulary | Speaking Style |
+| Scenario | Audio quality | Vocabulary | Speaking style |
|-|||-|
-| Call center | Low, 8 kHz, could be two people on one audio channel, could be compressed | Narrow, unique to domain and products | Conversational, loosely structured |
-| Voice assistant (such as Cortana, or a drive-through window) | High, 16 kHz | Entity heavy (song titles, products, locations) | Clearly stated words and phrases |
-| Dictation (instant message, notes, search) | High, 16 kHz | Varied | Note-taking |
+| Call center | Low, 8&nbsp;kHz, could be two people on one audio channel, could be compressed | Narrow, unique to domain and products | Conversational, loosely structured |
+| Voice assistant, such as Cortana, or a drive-through window | High, 16&nbsp;kHz | Entity-heavy (song titles, products, locations) | Clearly stated words and phrases |
+| Dictation (instant message, notes, search) | High, 16&nbsp;kHz | Varied | Note-taking |
| Video closed captioning | Varied, including varied microphone use, added music | Varied, from meetings, recited speech, musical lyrics | Read, prepared, or loosely structured |
+| | |
-Different scenarios produce different quality outcomes. The following table examines how content from these four scenarios rates in the [word error rate (WER)](how-to-custom-speech-evaluate-data.md). The table shows which error types are most common in each scenario.
+Different scenarios produce different quality outcomes. The following table examines how content from these four scenarios rates in the [WER](how-to-custom-speech-evaluate-data.md). The table shows which error types are most common in each scenario.
-| Scenario | Speech Recognition Quality | Insertion Errors | Deletion Errors | Substitution Errors |
-|-|-||--||
-| Call center | Medium (< 30% WER) | Low, except when other people talk in the background | Can be high. Call centers can be noisy, and overlapping speakers can confuse the model | Medium. Products and people's names can cause these errors |
-| Voice assistant | High (can be < 10% WER) | Low | Low | Medium, due to song titles, product names, or locations |
-| Dictation | High (can be < 10% WER) | Low | Low | High |
-| Video closed captioning | Depends on video type (can be < 50% WER) | Low | Can be high due to music, noises, microphone quality | Jargon may cause these errors |
+| Scenario | Speech recognition quality | Insertion errors | Deletion errors | Substitution errors |
+| | | | | |
+| Call center | Medium<br>(<&nbsp;30%&nbsp;WER) | Low, except when other people talk in the background | Can be high. Call centers can be noisy, and overlapping speakers can confuse the model | Medium. Products and people's names can cause these errors |
+| Voice assistant | High<br>(can be <&nbsp;10%&nbsp;WER) | Low | Low | Medium, due to song titles, product names, or locations |
+| Dictation | High<br>(can be <&nbsp;10%&nbsp;WER) | Low | Low | High |
+| Video closed captioning | Depends on video type (can be <&nbsp;50%&nbsp;WER) | Low | Can be high because of music, noises, microphone quality | Jargon might cause these errors |
+| | |
Determining the components of the WER (number of insertion, deletion, and substitution errors) helps determine what kind of data to add to improve the model. Use the [Custom Speech portal](https://speech.microsoft.com/customspeech) to view the quality of a baseline model. The portal reports insertion, substitution, and deletion error rates that are combined in the WER quality rate.
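As a quick illustration of how those components combine (the counts below are made up for the example, not taken from the portal), WER is the sum of insertion, deletion, and substitution errors divided by the number of words in the human-labeled reference transcript:

```python
# Illustrative numbers only: a 1,000-word reference transcript with
# 30 insertion, 50 deletion, and 80 substitution errors.
insertions, deletions, substitutions = 30, 50, 80
reference_words = 1000

wer = (insertions + deletions + substitutions) / reference_words
print(f"WER = {wer:.1%}")  # WER = 16.0%
```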
Determining the components of the WER (number of insertion, deletion, and substi
You can reduce recognition errors by adding training data in the [Custom Speech portal](https://speech.microsoft.com/customspeech).
-Plan to maintain your custom model by adding source materials periodically. Your custom model needs additional training to stay aware of changes to your entities. For example, you may need updates to product names, song names, or new service locations.
+Plan to maintain your custom model by adding source materials periodically. Your custom model needs additional training to stay aware of changes to your entities. For example, you might need updates to product names, song names, or new service locations.
The following sections describe how each kind of additional training data can reduce errors.
When you train a new custom model, start by adding plain text sentences of relat
### Add structured text data
-You can use structured text data in markdown format similarly to plain text sentences, but you would use structured text data when your data follows a particular pattern in particular utterances that only differ by words or phrases from a list. For more information, see [Structured text data for training](how-to-custom-speech-test-and-train.md#structured-text-data-for-training-public-preview).
+You can use structured text data in markdown format as you would with plain text sentences, but you would use structured text data when your data follows a particular pattern in particular utterances that differ only by words or phrases from a list. For more information, see [Structured text data for training](how-to-custom-speech-test-and-train.md#structured-text-data-for-training-public-preview).
> [!NOTE]
-> Training with structured text is only supported for these locales: `en-US`, `de-DE`, `en-UK`, `en-IN`, `fr-FR`, `fr-CA`, `es-ES`, `es-MX` and you must use the latest base model for these locales. See [Language support](language-support.md) for a list of base models that support training with structured text data.
+> Training with structured text is supported only for these locales: en-US, de-DE, en-UK, en-IN, fr-FR, fr-CA, es-ES, and es-MX. You must use the latest base model for these locales. See [Language support](language-support.md) for a list of base models that support training with structured text data.
>
-> For locales that don't support training with structured text, the service will take any training sentences that don't reference any classes as part of training with plain text data.
+> For locales that don't support training with structured text, the service will take any training sentences that don't reference classes as part of training with plain text data.
### Add audio with human-labeled transcripts
-Audio with human-labeled transcripts offers the greatest accuracy improvements if the audio comes from the target use case. Samples must cover the full scope of speech. For example, a call center for a retail store would get most calls about swimwear and sunglasses during summer months. Assure that your sample includes the full scope of speech you want to detect.
+Audio with human-labeled transcripts offers the greatest accuracy improvements if the audio comes from the target use case. Samples must cover the full scope of speech. For example, a call center for a retail store would get the most calls about swimwear and sunglasses during summer months. Ensure that your sample includes the full scope of speech that you want to detect.
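For orientation, such a dataset pairs each audio file with a line in a tab-delimited transcript file; see [Prepare and test your data](how-to-custom-speech-test-and-train.md) for the exact packaging requirements. The file names and sentences below are invented for illustration:

```txt
audio1.wav	we would like to order two pairs of sunglasses and a swimsuit
audio2.wav	is the store open on sunday afternoons
audio3.wav	can i return these if they do not fit
```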
Consider these details:
-* Training with audio will bring the most benefits if the audio is also hard to understand for humans. In most cases, you should start training by just using related text.
-* If you use one of the most heavily used languages such as US-English, there's a good chance that there's no need to train with audio data. For such languages, the base models offer already very good recognition results in most scenarios; it's probably enough to train with related text.
-* Custom Speech can only capture word context to reduce substitution errors, not insertion, or deletion errors.
+* Training with audio will bring the most benefits if the audio is also hard to understand for humans. In most cases, you should start training by using only related text.
+* If you use one of the most heavily used languages, such as US English, it's unlikely that you would need to train with audio data. For such languages, the base models already offer very good recognition results in most scenarios, so it's probably enough to train with related text.
+* Custom Speech can capture word context only to reduce substitution errors, not insertion or deletion errors.
* Avoid samples that include transcription errors, but do include a diversity of audio quality.
-* Avoid sentences that are not related to your problem domain. Unrelated sentences can harm your model.
-* When the quality of transcripts varies, you can duplicate exceptionally good sentences (like excellent transcriptions that include key phrases) to increase their weight.
-* The Speech service will automatically use the transcripts to improve the recognition of domain-specific words and phrases, as if they were added as related text.
-* It can take several days for a training operation to complete. To improve the speed of training, make sure to create your Speech service subscription in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+* Avoid sentences that are unrelated to your problem domain. Unrelated sentences can harm your model.
+* When the transcript quality varies, you can duplicate exceptionally good sentences (like excellent transcriptions that include key phrases) to increase their weight.
+* The Speech service automatically uses the transcripts to improve the recognition of domain-specific words and phrases, as though they were added as related text.
+* It can take several days for a training operation to finish. To improve the speed of training, be sure to create your Speech service subscription in a [region that has dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+ > [!NOTE]
-> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data. Even if a base model supports training with audio data, the service might use only part of the audio. Still it will use all the transcripts.
+> Not all base models support training with audio. If a base model doesn't support audio, the Speech service will use only the text from the transcripts and ignore the audio. For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text). Even if a base model does support training with audio data, the service might use only part of the audio. And it will still use all the transcripts.
> [!NOTE]
-> In cases when you change the base model used for training, and you have audio in the training dataset, *always* check whether the new selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model did not support training with audio data, and the training dataset contains audio, training time with the new base model will **drastically** increase, and may easily go from several hours to several days and more. This is especially true if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+> When you change the base model that's used for training, and you have audio in the training dataset, *always* check to see whether the new selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model didn't support training with audio data, and the training dataset contains audio, training time with the new base model will *drastically* increase. The duration might easily go from several hours to several days or longer. This is especially true if your Speech service subscription is *not* in a [region that has the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
>
-> If you face the issue described in the paragraph above, you can quickly decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text. The latter option is highly recommended if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+> If you face this issue, you can decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text. We recommend the latter option if your Speech service subscription is *not* in a region that has such dedicated hardware.
### Add new words with pronunciation
-Words that are made-up or highly specialized may have unique pronunciations. These words can be recognized if the word can be broken down into smaller words to pronounce it. For example, to recognize **Xbox**, pronounce as **X box**. This approach will not increase overall accuracy, but can increase recognition of these keywords.
+Words that are made up or highly specialized might have unique pronunciations. These words can be recognized if they can be broken down into smaller words to pronounce them. For example, to recognize *Xbox*, pronounce it as *X box*. This approach won't increase overall accuracy, but can improve recognition of this and other keywords.
> [!NOTE]
-> This technique is only available for some languages at this time. See customization for pronunciation in [the Speech-to-text table](language-support.md) for details.
+> This technique is available for only certain languages at this time. To see which languages support customization of pronunciation, search for "Pronunciation" in the **Customizations** column in the [speech-to-text table](language-support.md#speech-to-text).
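For reference, pronunciation data is uploaded as a plain-text file in which each line pairs a displayed form with its spoken form, separated by a tab (see [Prepare and test your data](how-to-custom-speech-test-and-train.md) for the exact requirements). The entries here are only examples:

```txt
Xbox	x box
3CPO	three c p o
IEEE	i triple e
```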
## Sources by scenario
-The following table shows voice recognition scenarios and lists source materials to consider within the three training content categories listed above.
+The following table shows voice recognition scenarios and lists source materials to consider within the three previously mentioned training content categories.
| Scenario | Plain text data and <br> structured text data | Audio + human-labeled transcripts | New words with pronunciation |
-|-||||
-| Call center | marketing documents, website, product reviews related to call center activity | call center calls transcribed by humans | terms that have ambiguous pronunciations (see Xbox above) |
-| Voice assistant | list sentences using all combinations of commands and entities | record voices speaking commands into device, and transcribe into text | names (movies, songs, products) that have unique pronunciations |
-| Dictation | written input, like instant messages or emails | similar to above | similar to above |
-| Video closed captioning | TV show scripts, movies, marketing content, video summaries | exact transcripts of videos | similar to above |
+| | | | |
+| Call center | Marketing documents, website, product reviews related to call center activity | Call center calls transcribed by humans | Terms that have ambiguous pronunciations (see the *Xbox* example in the preceding section) |
+| Voice assistant | Lists of sentences that use various combinations of commands and entities | Recorded voices speaking commands into a device, transcribed into text | Names (movies, songs, products) that have unique pronunciations |
+| Dictation | Written input, such as instant messages or emails | Similar to preceding examples | Similar to preceding examples |
+| Video closed captioning | TV show scripts, movies, marketing content, video summaries | Exact transcripts of videos | Similar to preceding examples |
+| | |
## Next steps
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
Title: Train and deploy a Custom Speech model - Speech service
-description: Learn how to train and deploy Custom Speech models. Training a speech-to-text model can improve recognition accuracy for the Microsoft baseline model or a for custom model.
+description: Learn how to train and deploy Custom Speech models. Training a speech-to-text model can improve recognition accuracy for the Microsoft baseline model or a custom model.
# Train and deploy a Custom Speech model
-In this article, you'll learn how to train and deploy Custom Speech models. Training a speech-to-text model can improve recognition accuracy for the Microsoft baseline model. You use human-labeled transcriptions and related text to train a model. These datasets, along with previously uploaded audio data, are used to refine and train the speech-to-text model.
+In this article, you'll learn how to train and deploy Custom Speech models. Training a speech-to-text model can improve recognition accuracy for the Microsoft baseline model. You use human-labeled transcriptions and related text to train a model. And you use these datasets, along with previously uploaded audio data, to refine and train the speech-to-text model.
## Use training to resolve accuracy problems
-If you're encountering recognition problems with a base model, you can use human-labeled transcripts and related data to train a custom model and help improve accuracy. Use this table to determine which dataset to use to address your problems:
+If you're encountering recognition problems with a base model, you can use human-labeled transcripts and related data to train a custom model and help improve accuracy. To determine which dataset to use to address your problems, refer to the following table:
| Use case | Data type |
| -- | -- |
-| Improve recognition accuracy on industry-specific vocabulary and grammar, like medical terminology or IT jargon | Plain text or structured text data |
-| Define the phonetic and displayed form of a word or term that has nonstandard pronunciation, like product names or acronyms | Pronunciation data or phonetic pronunciation in structured text |
-| Improve recognition accuracy on speaking styles, accents, or specific background noises | Audio + human-labeled transcripts |
+| Improve recognition accuracy on industry-specific vocabulary and grammar, such as medical terminology or IT jargon. | Plain text or structured text data |
+| Define the phonetic and displayed form of a word or term that has nonstandard pronunciation, such as product names or acronyms. | Pronunciation data or phonetic pronunciation in structured text |
+| Improve recognition accuracy on speaking styles, accents, or specific background noises. | Audio + human-labeled transcripts |
+| | |
## Train and evaluate a model
-The first step to train a model is to upload training data. See [Prepare and test your data](./how-to-custom-speech-test-and-train.md) for step-by-step instructions to prepare human-labeled transcriptions and related text (utterances and pronunciations). After you upload training data, follow these instructions to start training your model:
+The first step in training a model is to upload training data. For step-by-step instructions for preparing human-labeled transcriptions and related text (utterances and pronunciations), see [Prepare and test your data](./how-to-custom-speech-test-and-train.md). After you upload training data, follow these instructions to start training your model:
1. Sign in to the [Custom Speech portal](https://speech.microsoft.com/customspeech). If you plan to train a model with audio + human-labeled transcription datasets, pick a Speech subscription in a [region with dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
-2. Go to **Speech-to-text** > **Custom Speech** > **[name of project]** > **Training**.
-3. Select **Train model**.
-4. Give your training a **Name** and **Description**.
-5. In the **Scenario and Baseline model** list, select the scenario that best fits your domain. If you're not sure which scenario to choose, select **General**. The baseline model is the starting point for training. The latest model is usually the best choice.
-6. On the **Select training data** page, choose one or more related text datasets or audio + human-labeled transcription datasets that you want to use for training.
-> [!NOTE]
-> When you train a new model, start with related text; training with audio + human-labeled transcription might take much longer **(up to [several days](how-to-custom-speech-evaluate-data.md#add-audio-with-human-labeled-transcripts)**).
+1. Go to **Speech-to-text** > **Custom Speech** > **[name of project]** > **Training**.
-> [!NOTE]
-> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
+1. Select **Train model**.
-> [!NOTE]
-> In cases when you change the base model used for training, and you have audio in the training dataset, *always* check whether the new selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model did not support training with audio data, and the training dataset contains audio, training time with the new base model will **drastically** increase, and may easily go from several hours to several days and more. This is especially true if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
->
-> If you face the issue described in the paragraph above, you can quickly decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text. The latter option is highly recommended if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+1. Give your training a **Name** and **Description**.
+
+1. In the **Scenario and Baseline model** list, select the scenario that best fits your domain. If you're not sure which scenario to choose, select **General**. The baseline model is the starting point for training. The most recent model is usually the best choice.
+
+1. On the **Select training data** page, choose one or more related text datasets or audio + human-labeled transcription datasets that you want to use for training.
+
+ > [!NOTE]
+ > When you train a new model, start with related text. Training with audio + human-labeled transcription might take much longer (up to [several days](how-to-custom-speech-evaluate-data.md#add-audio-with-human-labeled-transcripts)).
+
+ > [!NOTE]
+ > Not all base models support training with audio. If a base model doesn't support it, the Speech service will use only the text from the transcripts and ignore the audio. For a list of base models that support training with audio data, see [Language support](language-support.md#speech-to-text).
+
+ > [!NOTE]
+ > When you change the base model used for training, and you have audio in the training dataset, *always* check to see whether the new selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model didn't support training with audio data, and the training dataset contains audio, training time with the new base model will *drastically* increase, and might easily go from several hours to several days or more. This is especially true if your Speech service subscription is *not* in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
+ >
+ > If you face the issue described in the preceding paragraph, you can quickly decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text. We recommend the latter option if your Speech service subscription is *not* in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
-7. After training is complete, you can do accuracy testing on the newly trained model. This step is optional.
-8. Select **Create** to build your custom model.
+1. After training is complete, you can do accuracy testing on the newly trained model. This step is optional.
+1. Select **Create** to build your custom model.
-The **Training** table displays a new entry that corresponds to the new model. The table also displays the status: **Processing**, **Succeeded**, or **Failed**.
+The **Training** table displays a new entry that corresponds to the new model. The table also displays the status: *Processing*, *Succeeded*, or *Failed*.
-See the [how-to](how-to-custom-speech-evaluate-data.md) on evaluating and improving Custom Speech model accuracy. If you choose to test accuracy, it's important to select an acoustic dataset that's different from the one you used with your model to get a realistic sense of the model's performance.
+For more information, see the [how-to article](how-to-custom-speech-evaluate-data.md) about evaluating and improving Custom Speech model accuracy. If you choose to test accuracy, it's important to select an acoustic dataset that's different from the one you used with your model. This approach can provide a more realistic sense of the model's performance.
> [!NOTE]
-> Base models and custom models can be used up to a certain date as described in [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md). Speech Studio shows this date in the **Expiration** column for each model and endpoint. After that date request to an endpoint or to batch transcription might fail or fall back to base model.
+> Both base models and custom models can be used only up to a certain date (see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). Speech Studio displays this date in the **Expiration** column for each model and endpoint. After that date, a request to an endpoint or to batch transcription might fail or fall back to the base model.
>
-> Retrain your model using the then most recent base model to benefit from accuracy improvements and to avoid that your model expires.
+> Retrain your model by using the most recent base model to benefit from accuracy improvements and to avoid allowing your model to expire.
## Deploy a custom model After you upload and inspect data, evaluate accuracy, and train a custom model, you can deploy a custom endpoint to use with your apps, tools, and products.
-To create a custom endpoint, sign in to the [Custom Speech portal](https://speech.microsoft.com/customspeech). Select **Deployment** in the **Custom Speech** menu at the top of the page. If this is your first run, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
+1. To create a custom endpoint, sign in to the [Custom Speech portal](https://speech.microsoft.com/customspeech).
-Next, select **Add endpoint** and enter a **Name** and **Description** for your custom endpoint. Then select the custom model that you want to associate with the endpoint. You can also enable logging from this page. Logging allows you to monitor endpoint traffic. If logging is disabled, traffic won't be stored.
+1. On the **Custom Speech** menu at the top of the page, select **Deployment**.
-![Screenshot that shows the New endpoint page.](./media/custom-speech/custom-speech-deploy-model.png)
+ If this is your first run, you'll notice that there are no endpoints listed in the table. After you create an endpoint, you use this page to track each deployed endpoint.
-> [!NOTE]
-> Don't forget to accept the terms of use and pricing details.
+1. Select **Add endpoint**, and then enter a **Name** and **Description** for your custom endpoint.
+
+1. Select the custom model that you want to associate with the endpoint.
+
+ You can also enable logging from this page. Logging allows you to monitor endpoint traffic. If logging is disabled, traffic won't be stored.
+
+ ![Screenshot showing the selected `Log content from this endpoint` checkbox on the `New endpoint` page.](./media/custom-speech/custom-speech-deploy-model.png)
+
+ > [!NOTE]
+ > Remember to accept the terms of use and pricing details.
-Next, select **Create**. This action returns you to the **Deployment** page. The table now includes an entry that corresponds to your custom endpoint. The endpoint's status shows its current state. It can take up to 30 minutes to instantiate a new endpoint using your custom models. When the status of the deployment changes to **Complete**, the endpoint is ready to use.
+1. Select **Create**.
-After your endpoint is deployed, the endpoint name appears as a link. Select the link to see information specific to your endpoint, like the endpoint key, endpoint URL, and sample code. Take a note of the expiration date and update the endpoint's model before that date to ensure uninterrupted service.
+ This action returns you to the **Deployment** page. The table now includes an entry that corresponds to your custom endpoint. The endpoint's status shows its current state. It can take up to 30 minutes to instantiate a new endpoint that uses your custom models. When the status of the deployment changes to **Complete**, the endpoint is ready to use.
+
+ After your endpoint is deployed, the endpoint name appears as a link.
+
+1. Select the endpoint link to view information specific to it, such as the endpoint key, endpoint URL, and sample code.
+
+ Note the expiration date, and update the endpoint's model before that date to ensure uninterrupted service.
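The sample code shown on the endpoint page is the authoritative reference for your endpoint. As a rough sketch of how a custom endpoint is typically referenced from the Speech SDK for Python, assuming you replace the placeholder key, region, and endpoint ID with the values from your endpoint's details page:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder values: copy the real key, region, and endpoint ID
# from your endpoint's details page.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
speech_config.endpoint_id = "YourEndpointId"  # routes recognition to the custom model

# Uses the default microphone; pass an AudioConfig to read from a file instead.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print(result.text)
```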
## View logging data
-Logging data is available for export if you go to the endpoint's page under **Deployments**.
+Logging data is available for export from the endpoint's page, under **Deployments**.
+ > [!NOTE]
->Logging data is available for 30 days on Microsoft-owned storage. It will be removed afterwards. If a customer-owned storage account is linked to the Cognitive Services subscription, the logging data won't be automatically deleted.
+> Logging data is available on Microsoft-owned storage for 30 days, after which it will be removed. If a customer-owned storage account is linked to the Cognitive Services subscription, the logging data won't be automatically deleted.
## Additional resources
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Title: "Create a Custom Voice - Speech service"
+ Title: Create a custom voice - Speech service
-description: "When you're ready to upload your data, go to the Custom Voice portal. Create or select a Custom Voice project. The project must share the right language/locale and the gender properties as the data you intend to use for your voice training."
+description: When you're ready to upload your data, go to the Custom Voice portal. Create or select a Custom Voice project. The project must have the same language, locale, and gender properties as the data you intend to use for your voice training.
# Create and use your voice model
-In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned about the different data types you can use to train a custom neural voice and the different format requirements. Once you've prepared your data and the voice talent verbal statement, you can start to upload them to the [Speech Studio](https://aka.ms/custom-voice-portal). In this article, you learn how to train a custom neural voice through the Speech Studio portal. See the [supported languages](language-support.md#custom-neural-voice) for custom neural voice.
+In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned about the different data types you can use to train a custom neural voice, and the different format requirements. After you've prepared your data and the voice talent verbal statement, you can start to upload them to [Speech Studio](https://aka.ms/custom-voice-portal). In this article, you learn how to train a custom neural voice through the Speech Studio portal.
## Prerequisites
In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned abo
## Set up voice talent
-A voice talent is an individual or target speaker whose voices are recorded and used to create neural voice models. Before you create a voice, define your voice persona and select a right voice talent. For details on recording voice samples, see [the tutorial](record-custom-voice-samples.md).
+A *voice talent* is an individual or target speaker whose voice is recorded and used to create neural voice models. Before you create a voice, define your voice persona and select the right voice talent. For details on recording voice samples, see [the tutorial](record-custom-voice-samples.md).
-To train a neural voice, you must create a voice talent profile with an audio file recorded by the voice talent consenting to the usage of their speech data to train a custom voice model. When preparing your recording script, make sure you include the statement sentence. You can find the statement in multiple languages [here](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording. You need to upload this audio file to the Speech Studio as shown below to create a voice talent profile, which is used to verify against your training data when you create a voice model. Read more about the [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) here.
+To train a neural voice, you must create a voice talent profile with an audio file in which the voice talent consents to the use of their speech data to train a custom voice model. When you prepare your recording script, make sure you include the statement sentence. You can find the statement in multiple languages on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt). The language of the verbal statement must be the same as your recording.
- :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Upload voice talent statement":::
+Upload this audio file to Speech Studio as shown in the following screenshot. Doing so creates a voice talent profile, which is verified against your training data when you create a voice model. For more information, see [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
+
+ :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Screenshot that shows the upload voice talent statement.":::
> [!NOTE]
-> Custom neural voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext), and then [apply for access](https://aka.ms/customneural).
+> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext), and then [apply for access](https://aka.ms/customneural).
-The following steps assume you've prepared the voice talent verbal consent files. Go to [Speech Studio](https://aka.ms/custom-voice-portal) to select a custom neural voice project, then follow the following steps to create a voice talent profile.
+The following steps assume that you've prepared the voice talent verbal consent files. Go to [Speech Studio](https://aka.ms/custom-voice-portal) to select a Custom Neural Voice project, and then follow these steps to create a voice talent profile.
-1. Navigate to **Text-to-Speech** > **Custom Voice** > **select a project** > **Set up voice talent**.
+1. Go to **Text-to-Speech** > **Custom Voice** > **select a project**, and select **Set up voice talent**.
-2. Select **Add voice talent**.
+1. Select **Add voice talent**.
-3. Next, to define voice characteristics, select **Target scenario** to be used. Then describe your **Voice characteristics**.
+1. Next, to define voice characteristics, select **Target scenario**. Then describe your **Voice characteristics**.
-> [!NOTE]
-> The scenarios you provide must be consistent with what you've applied for in the application form.
+ >[!NOTE]
+ >The scenarios you provide must be consistent with what you've applied for in the application form.
-4. Then, go to **Upload voice talent statement**, follow the instruction to upload voice talent statement you've prepared beforehand.
+1. Then, go to **Upload voice talent statement**, and follow the instructions to upload the voice talent statement you've prepared beforehand.
-> [!NOTE]
-> Make sure the verbal statement is recorded in the same settings as your training data, including the recording environment and speaking style.
+ >[!NOTE]
+ >Make sure the verbal statement is recorded in the same settings as your training data, including the recording environment and speaking style.
-5. Finally, go to **Review and create**, you can review the settings and select **Submit**.
+1. Go to **Review and create**, review the settings, and select **Submit**.
## Upload your data
-When you're ready to upload your data, go to the **Prepare training data** tab to add your first training set and upload data. A training set is a set of audio utterances and their mapping scripts used for training a voice model. You can use a training set to organize your training data. Data readiness checking will be done per each training set. You can import multiple data to a training set.
+When you're ready to upload your data, go to the **Prepare training data** tab to add your first training set and upload data. A *training set* is a set of audio utterances and their mapping scripts used for training a voice model. You can use a training set to organize your training data. The service checks data readiness for each training set. You can import multiple datasets into a training set.
-You can do the following to create and review your training data.
+You can do the following to create and review your training data:
1. Select **Prepare training data** > **Add training set**. 1. Enter **Name** and **Description**, and then select **Create** to add a new training set. When the training set is successfully created, you can start to upload your data.
-1. Select **Upload data** > **Choose data type** > **Upload data** > **Specify the target training set**.
-1. Enter **Name** and **Description** for your data > review the settings and select **Submit**.
+1. Select **Upload data** > **Choose data type** > **Upload data**. Then select **Specify the target training set**.
+1. Enter the name and description for your data, review the settings, and select **Submit**.
> [!NOTE]
->- Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicate, they'll be rejected.
->- If you've created data files in the previous version of Speech Studio, you must specify a training set for your data in advance to use them. Or else, an exclamation mark will be appended to the data name, and the data could not be used.
+>- Duplicate audio names are removed from the training. Make sure the data you select don't contain the same audio names within the .zip file or across multiple .zip files. If utterance IDs (either in audio or script files) are duplicates, they're rejected.
+>- If you've created data files in the previous version of Speech Studio, you must specify a training set for your data in advance to use them. If you haven't, an exclamation mark is appended to the data name, and the data can't be used.
-All data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Speech service. Go to [Prepare training data](how-to-custom-voice-prepare-data.md) and make sure your data has been rightly formatted.
+All data you upload must meet the requirements for the data type that you choose. It's important to correctly format your data before it's uploaded, which ensures the data will be accurately processed by the Speech service. Go to [Prepare training data](how-to-custom-voice-prepare-data.md), and confirm that your data is correctly formatted.
> [!NOTE] > - Standard subscription (S0) users can upload five data files simultaneously. If you reach the limit, wait until at least one of your data files finishes importing. Then try again.
-> - The maximum number of data files allowed to be imported per subscription is 10 .zip files for free subscription (F0) users and 500 for standard subscription (S0) users.
+> - The maximum number of data files allowed to be imported per subscription is 10 .zip files for free subscription (F0) users, and 500 for standard subscription (S0) users.
-Data files are automatically validated once you hit the **Submit** button. Data validation includes series of checks on the audio files to verify their file format, size, and sampling rate. Fix the errors if any and submit again.
+Data files are automatically validated when you select **Submit**. Data validation includes a series of checks on the audio files to verify their file format, size, and sampling rate. If there are any errors, fix them and submit again.
-Once the data is uploaded, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your data. The pronunciation score ranges from 0-100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
+After you upload the data, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your data files. The pronunciation score ranges from 0 to 100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
A higher signal-to-noise ratio (SNR) indicates lower noise in your audio. You can typically reach a 35+ SNR by recording at professional studios. Audio with an SNR below 20 can result in obvious noise in your generated voice. Consider re-recording any utterances with low pronunciation scores or poor signal-to-noise ratios. If you can't re-record, consider excluding those utterances from your data.
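If you'd like a rough, local sanity check of an utterance's SNR before uploading, you can approximate it yourself. The helper below is only a sketch, not part of the Speech service; it assumes the first 100 ms of each file is background noise, which matches the leading-silence guidance in the tables that follow:

```python
import numpy as np
import soundfile as sf

def estimate_snr_db(path, noise_ms=100):
    """Rough SNR estimate: treat the first `noise_ms` of the file as noise."""
    audio, rate = sf.read(path)
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)
    noise_samples = int(rate * noise_ms / 1000)
    noise_power = np.mean(audio[:noise_samples] ** 2) + 1e-12
    signal_power = np.mean(audio[noise_samples:] ** 2) + 1e-12
    return 10 * np.log10(signal_power / noise_power)

print(f"{estimate_snr_db('utterance_0001.wav'):.1f} dB")
```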
-On the **Data details**, you can check the data details of the training set. If there are any typical issues with the data, follow the instructions in the message displayed to fix them before training.
+On **Data details**, you can check the data details of the training set. If there are any typical issues with the data, follow the instructions in the message that appears, to fix them before training.
-The issues are divided into three types. Referring to the following three tables to check the respective types of errors. Data with these errors will be excluded during training.
+The issues are divided into three types. Refer to the following tables to check the respective types of errors. Data with these errors will be excluded during training.
| Category | Name | Description |
| -- | -- | -- |
-| Script | Invalid separator| You must separate the utterance ID and the script content with a TAB character.|
-| Script | Invalid script ID| Script line ID must be numeric.|
+| Script | Invalid separator| You must separate the utterance ID and the script content with a Tab character.|
+| Script | Invalid script ID| The script line ID must be numeric.|
| Script | Duplicated script|Each line of the script content must be unique. The line is duplicated with {}.|
| Script | Script too long| The script must be less than 1,000 characters.|
| Script | No matching audio| The ID of each utterance (each line of the script file) must match the audio ID.|
-| Script | No valid script| No valid script found in this dataset. Fix the script lines that appear in the detailed issue list.|
-| Audio | No matching script| No audio files match the script ID. The name of the wav files must match with the IDs in the script file.|
-| Audio | Invalid audio format| The audio format of the .wav files is invalid. Check the wav file format using an audio tool like [SoX](http://sox.sourceforge.net/).|
-| Audio | Low sampling rate| The sampling rate of the .wav files cannot be lower than 16 KHz.|
-| Audio | Too long audio| Audio duration is longer than 30 seconds. Split the long audio into multiple files. We suggest utterances should be shorter than 15 seconds.|
+| Script | No valid script| No valid script is found in this dataset. Fix the script lines that appear in the detailed issue list.|
+| Audio | No matching script| No audio files match the script ID. The name of the .wav files must match with the IDs in the script file.|
+| Audio | Invalid audio format| The audio format of the .wav files is invalid. Check the .wav file format by using an audio tool like [SoX](http://sox.sourceforge.net/).|
+| Audio | Low sampling rate| The sampling rate of the .wav files can't be lower than 16 kHz.|
+| Audio | Too long audio| Audio duration is longer than 30 seconds. Split the long audio into multiple files. It's a good idea to make utterances shorter than 15 seconds.|
| Audio | No valid audio| No valid audio is found in this dataset. Check your audio data and upload again.|
-The second type of errors listed in the next table will be automatically fixed, but double checking the fixed data is recommended.
+The following errors are fixed automatically, but you should confirm that the fixes have been made.
| Category | Name | Description |
| -- | -- | -- |
| Mismatch |Silence auto fixed |The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. |
| Mismatch |Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it.|
-Unresolved errors listed in the next table will affect the quality of training. However, the data with these errors won't be excluded during training. For higher-quality training, manually fixing these errors is recommended.
+Unresolved errors listed in the next table affect the quality of training, but data with these errors won't be excluded during training. For higher-quality training, it's a good idea to fix these errors manually.
| Category | Name | Description |
| -- | -- | -- |
-| Script | Non-normalized text|This script contains digit 0-9. Expand them to normalized words and match with the audio. For example, normalize "123" to "one hundred and twenty-three".|
-| Script | Non-normalized text|This script contains symbols {}. Normalize the symbols to match the audio. For example, '50%' to "fifty percent".|
-| Script | Not enough question utterances| At least 10% of the total utterances should be question sentences. This helps the voice model properly express a questioning tone.|
-| Script |Not enough exclamation utterances| At least 10% of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone.|
-| Audio| Low sampling rate for neural voice | It's recommended that the sampling rate of your .wav files should be 24 KHz or higher for creating neural voices. It will be automatically upsampled to 24 KHz if it's lower.|
-| Volume |Overall volume too low|Volume shouldn't be lower than -18 dB (10% of max volume). Control the volume average level within proper range during the sample recording or data preparation.|
+| Script | Non-normalized text|This script contains digits. Expand them to normalized words, and match with the audio. For example, normalize *123* to *one hundred and twenty-three*.|
+| Script | Non-normalized text|This script contains symbols. Normalize the symbols to match the audio. For example, normalize *50%* to *fifty percent*.|
+| Script | Not enough question utterances| At least 10 percent of the total utterances should be question sentences. This helps the voice model properly express a questioning tone.|
+| Script | Not enough exclamation utterances| At least 10 percent of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone.|
+| Audio| Low sampling rate for neural voice | It's recommended that the sampling rate of your .wav files be 24 kHz or higher for creating neural voices. If it's lower, it will be automatically raised to 24 kHz.|
+| Volume |Overall volume too low|Volume shouldn't be lower than -18 dB (10 percent of max volume). Control the volume average level within proper range during the sample recording or data preparation.|
| Volume | Volume overflow| Overflowing volume is detected at {}s. Adjust the recording equipment to avoid the volume overflow at its peak value.|
-| Volume | Start silence issue | The first 100 ms silence isn't clean. Reduce the recording noise floor level and leave the first 100 ms at the start silent.|
-| Volume| End silence issue| The last 100 ms silence isn't clean. Reduce the recording noise floor level and leave the last 100 ms at the end silent.|
-| Mismatch | Low scored words|Review the script and the audio content to make sure they match and control the noise floor level. Reduce the length of long silence or split the audio into multiple utterances if it's too long.|
+| Volume | Start silence issue | The first 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the first 100 ms at the start silent.|
+| Volume| End silence issue| The last 100 ms of silence isn't clean. Reduce the recording noise floor level, and leave the last 100 ms at the end silent.|
+| Mismatch | Low scored words|Review the script and the audio content to make sure they match, and control the noise floor level. Reduce the length of long silence, or split the audio into multiple utterances if it's too long.|
| Mismatch | Start silence issue |Extra audio was heard before the first word. Review the script and the audio content to make sure they match, control the noise floor level, and make the first 100 ms silent.|
| Mismatch | End silence issue| Extra audio was heard after the last word. Review the script and the audio content to make sure they match, control the noise floor level, and make the last 100 ms silent.|
| Mismatch | Low signal-noise ratio | Audio SNR level is lower than 20 dB. At least 35 dB is recommended.|
| Mismatch | No score available |Failed to recognize speech content in this audio. Check the audio and the script content to make sure the audio is valid, and matches the script.|
-## Train your custom neural voice model
+## Train your Custom Neural Voice model
-After your data files have been validated, you can use them to build your custom neural voice model.
+After you validate your data files, you can use them to build your Custom Neural Voice model.
1. On the **Train model** tab, select **Train model** to create a voice model with the data you've uploaded.
-2. Select the neural training method for your model and target language.
-
-By default, your voice model is trained in the same language of your training data. You can also select to create a secondary language (preview) for your voice model. Check the languages supported for custom neural voice and cross-lingual feature: [language for custom neural voice](language-support.md#custom-neural-voice).
-
-Training of custom neural voices isn't free. Check the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for details. However, if you have statistical parametric or concatenative voice models deployed before March 31, 2021 with S0 Speech resources, free neural training credits are offered to your Azure subscription, and you can train 5 different versions of neural voices for free.
+1. Select the neural training method for your model and target language. By default, your voice model is trained in the same language as your training data. You can also choose to create a secondary language for your voice model. For more information, see [language support for Custom Neural Voice](language-support.md#custom-neural-voice). Also see information about [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) for neural training.
-3. Next, choose the data you want to use for training, and specify a speaker file.
+1. Choose the data you want to use for training, and specify a speaker file.
->[!NOTE]
->- You need to select at least 300 utterances to create a custom neural voice.
->- To train a neural voice, you must specify a voice talent profile with the audio consent file provided of the voice talent acknowledging to use his/her speech data to train a custom neural voice model. Custom neural voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply the access here](https://aka.ms/customneural).
+ >[!NOTE]
+ >- To create a custom neural voice, select at least 300 utterances.
+ >- To train a neural voice, you must specify a voice talent profile. This profile must provide the audio consent file of the voice talent, acknowledging the use of his or her speech data to train a custom neural voice model. Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply for access](https://aka.ms/customneural).
-4. Then, choose your test script.
+1. Choose your test script. Each training generates 100 sample audio files automatically, to help you test the model with a default script. You can also provide your own test script. The test script must exclude the filenames (the ID of each utterance). Otherwise, these IDs are spoken. Here's an example of how the utterances are organized in one .txt file:
-Each training will generate 100 sample audio files automatically to help you test the model with a default script. You can also provide your own test script as optional. The test script must exclude the filenames (the ID of each utterance), otherwise, these IDs will be spoken. Below is an example of how the utterances are organized in one .txt file:
+ ```
+ This is the waistline, and it's falling.
+ We have trouble scoring.
+ It was Janet Maslin.
+ ```
-```
-This is the waistline, and it's falling.
-We have trouble scoring.
-It was Janet Maslin.
-```
+ Each paragraph of the utterance results in a separate audio. If you want to combine all sentences into one audio, make them a single paragraph.
-Each paragraph of the utterance will result in a separate audio. If you want to combine all sentences into one audio, make them in one paragraph.
+ >[!NOTE]
+ >- The test script must be a .txt file, less than 1 MB. Supported encoding formats include ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE.
+ >- The generated audios are a combination of the uploaded test script and the default test script.
->[!NOTE]
->- The test script must be a txt file, less than 1 MB. Supported encoding format includes ANSI/ASCII, UTF-8, UTF-8-BOM, UTF-16-LE, or UTF-16-BE.
->- The generated audios are a combination of the uploaded test script and the default test script.
+1. Enter a **Name** and **Description** to help you identify this model. Choose a name carefully. The name you enter here will be the name you use to specify the voice in your request for speech synthesis as part of the SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
-5. Enter a **Name** and **Description** to help you identify this model.
+ A common use of the **Description** field is to record the names of the data that you used to create the model.
-Choose a name carefully. The name you enter here will be the name you use to specify the voice in your request for speech synthesis as part of the SSML input. Only letters, numbers, and a few punctuation characters such as -, _, and (', ') are allowed. Use different names for different neural voice models.
+1. Review the settings, then select **Submit** to start training the model.
-A common use of the **Description** field is to record the names of the data that were used to create the model.
+ > [!NOTE]
+ > Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files.
-6. Review the settings, then select **Submit** to start training the model.
+ The **Train model** table displays a new entry that corresponds to this newly created model. The table also displays the status: processing, succeeded, or failed. The status reflects the process of converting your data to a voice model, as shown in this table:
-> [!NOTE]
-> Duplicate audio names will be removed from the training. Make sure the data you select don't contain the same audio names across multiple .zip files.
-
-The **Train model** table displays a new entry that corresponds to this newly created model. The table also displays the status: Processing, Succeeded, Failed.
+ | State | Meaning |
+ | -- | - |
+ | Processing | Your voice model is being created. |
+ | Succeeded | Your voice model has been created and can be deployed. |
+   | Failed | Your voice model failed during training. The cause of the failure might be, for example, unseen data problems or network issues. |
-The status that's shown reflects the process of converting your data to a voice model, as shown here.
+ Training duration varies depending on how much data you're training. It takes about 40 compute hours on average to train a custom neural voice.
-| State | Meaning |
-| -- | - |
-| Processing | Your voice model is being created. |
-| Succeeded | Your voice model has been created and can be deployed. |
-| Failed | Your voice model has been failed in training because of many reasons, for example, unseen data problems or network issues. |
+ > [!NOTE]
+ > Standard subscription (S0) users can train three voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
-Training duration varies depending on how much data you're training. It takes about 40 compute hours on average to train a custom neural voice.
+1. After you finish training the model successfully, you can review the model details.
-> [!NOTE]
-> Standard subscription (S0) users can train three voices simultaneously. If you reach the limit, wait until at least one of your voice models finishes training, and then try again.
+After your voice model is successfully built, you can use the generated sample audio files to test it before deploying it for use.
-7. After you finish training the model successfully, you can review the model details.
+The quality of the voice depends on many factors, such as:
-After your voice model is successfully built, you can use the generated sample audio files to test it before deploying it for use.
+- The size of the training data.
+- The quality of the recording.
+- The accuracy of the transcript file.
+- How well the recorded voice in the training data matches the personality of the designed voice for your intended use case.
-The quality of the voice depends on many factors, including the size of the training data, the quality of the recording, the accuracy of the transcript file, how well the recorded voice in the training data matches the personality of the designed voice for your intended use case, and more. [Check here to learn more about the capabilities and limits of our technology and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
+For more information, see the [capabilities and limits of this feature, and the best practices for improving your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
> [!NOTE]
-> Custom neural voice training is only available in the three regions: East US, Southeast Asia, and UK South. But you can easily copy a neural voice model from the three regions to other different regions. Check the regions supported for custom neural voice: [regions for custom neural voice](regions.md#text-to-speech).
+> Custom Neural Voice training is only available in three regions: East US, Southeast Asia, and UK South. But you can easily copy a neural voice model from these regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#text-to-speech).
-## Create and use a custom neural voice endpoint
+## Create and use a Custom Neural Voice endpoint
-After you've successfully created and tested your voice model, you deploy it in a custom Text-to-Speech endpoint. You then use this endpoint in place of the usual endpoint when making Text-to-Speech requests through the REST API. Your custom endpoint can be called only by the subscription that you've used to deploy the model.
+After you've successfully created and tested your voice model, you deploy it in a custom text-to-speech endpoint. Use this endpoint instead of the usual endpoint when you're making text-to-speech requests through the REST API. The subscription that you've used to deploy the model is the only one that can call your custom endpoint.
-You can do the following to create a custom neural voice endpoint.
+To create a Custom Neural Voice endpoint:
-1. Select **Deploy model** > **Deploy model**.
+1. On the **Deploy model** tab, select **Deploy model**.
1. Enter a **Name** and **Description** for your custom endpoint.
-1. Select a voice model you would like to associate with this endpoint.
+1. Select a voice model that you want to associate with this endpoint.
1. Select **Deploy** to create your endpoint.
-After you've selected the **Deploy** button, you'll see an entry for your new endpoint. It may take a few minutes for the Speech service to deploy the new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
+In the endpoint table, you now see an entry for your new endpoint. It might take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
-You can **Suspend** and **Resume** your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL will be kept the same so you don't need to change your code in your apps.
+You can suspend and resume your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL is retained, so you don't need to change your code in your apps.
You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update. > [!NOTE] >- Standard subscription (S0) users can create up to 50 endpoints, each with its own custom neural voice.
->- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same subscription to pass through the authentication of TTS service.
+>- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same subscription to authenticate with the text-to-speech service.
-After your endpoint is deployed, the endpoint name appears as a link. Click the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code.
+After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code.
-The custom endpoint is functionally identical to the standard endpoint that's used for Text-to-Speech requests. For more information, see [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).
+The custom endpoint is functionally identical to the standard endpoint that's used for text-to-speech requests. For more information, see the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).
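For orientation, here's a minimal, hedged sketch of calling a deployed Custom Neural Voice from the Speech SDK for Python. This is not the sample code shown on the endpoint page; the key, region, endpoint ID, and voice name are placeholders, and the `endpoint_id` and `speech_synthesis_voice_name` properties are assumed to be available in your SDK version.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute the values shown on your endpoint's detail page.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
speech_config.endpoint_id = "YOUR_CUSTOM_ENDPOINT_ID"               # the deployed endpoint's ID
speech_config.speech_synthesis_voice_name = "YourCustomVoiceName"   # the model name you chose

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from my custom neural voice.").get()
print(result.reason)
```

If you use the REST API instead, the same voice name goes in the `name` attribute of the SSML `voice` element, and the request is sent to the custom endpoint URL shown in the portal.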
-We also provide an online tool, [Audio Content Creation](https://speech.microsoft.com/audiocontentcreation), that allows you to fine-tune their audio output using a friendly UI.
+[Audio Content Creation](https://speech.microsoft.com/audiocontentcreation) is a tool that allows you to fine-tune audio output by using a friendly UI.
## Next steps
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Get facial pose events > [!NOTE]
-> Viseme events are only available for `en-US` English (United States) [neural voices](language-support.md#text-to-speech) for now.
+> At this time, viseme events are available only for English (US) [neural voices](language-support.md#text-to-speech).
-A _viseme_ is the visual description of a phoneme in spoken language. It defines the position of the face and mouth when speaking a word. Each viseme depicts the key facial poses for a specific set of phonemes.
+A _viseme_ is the visual description of a phoneme in spoken language. It defines the position of the face and mouth when a person speaks a word. Each viseme depicts the key facial poses for a specific set of phonemes.
-Viseme can be used to control the movement of 2D and 3D avatar models, perfectly matching mouth movements to synthetic speech. For example, you can:
+You can use visemes to control the movement of 2D and 3D avatar models, so that the mouth movements are perfectly matched to synthetic speech. For example, you can:
* Create an animated virtual voice assistant for intelligent kiosks, building multi-mode integrated services for your customers. * Build immersive news broadcasts and improve audience experiences with natural face and mouth movements. * Generate more interactive gaming avatars and cartoon characters that can speak with dynamic content.
- * Make more effective language teaching videos that help language learners to understand the mouth behavior of each word and phoneme.
+ * Make more effective language teaching videos that help language learners understand the mouth behavior of each word and phoneme.
* People with hearing impairment can also pick up sounds visually and "lip-read" speech content that shows visemes on an animated face.
-See [the introduction video](https://youtu.be/ui9XT47uwxs) of the viseme.
+For more information about visemes, view this [introductory video](https://youtu.be/ui9XT47uwxs).
> [!VIDEO https://www.youtube.com/embed/ui9XT47uwxs]
-## Azure neural TTS can produce visemes with speech
+## Azure Neural TTS can produce visemes with speech
-A neural voice turns input text or SSML (Speech Synthesis Markup Language) into synthesized speech. Speech audio output can be accompanied by viseme IDs and their offset timestamps. Each viseme ID specifies a specific pose in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme. Using a 2D or 3D rendering engine, you can use these viseme events to animate your avatar.
+Neural Text-to-Speech (Neural TTS) turns input text or SSML (Speech Synthesis Markup Language) into lifelike synthesized speech. Speech audio output can be accompanied by viseme IDs and their offset timestamps. Each viseme ID specifies a specific pose in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme. With a 2D or 3D rendering engine, you can use these viseme events to animate your avatar.
-The overall workflow of viseme is depicted in the flowchart below.
+The overall workflow of viseme is depicted in the following flowchart:
-![The overall workflow of viseme](media/text-to-speech/viseme-structure.png)
+![Diagram of the overall workflow of viseme.](media/text-to-speech/viseme-structure.png)
-| Parameter | Description |
+*Viseme ID* and *audio offset output* are described in the following table:
+
+| Viseme&nbsp;element | Description |
|--|-|
-| Viseme ID | Integer number that specifies a viseme. In English (United States), we offer 22 different visemes to depict the mouth shapes for a specific set of phonemes. There is no one-to-one correspondence between visemes and phonemes. Often several phonemes correspond to a single viseme, as several phonemes look the same on the face when produced, such as `s` and `z`. See the [mapping table between Viseme ID and phonemes](#map-phonemes-to-visemes). |
+| Viseme ID | An integer number that specifies a viseme.<br>For English (US), we offer 22 different visemes, each depicting the mouth shape for a specific set of phonemes. There is no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes). |
| Audio offset | The start time of each viseme, in ticks (100 nanoseconds). |
+| | |
## Get viseme events with the Speech SDK
-To get viseme with your synthesized speech, subscribe to the `VisemeReceived` event in Speech SDK.
-The following snippet shows how to subscribe to the viseme event.
+To get visemes with your synthesized speech, subscribe to the `VisemeReceived` event in the Speech SDK.
+
+The following snippet shows how to subscribe to the viseme event:
::: zone pivot="programming-language-csharp"
Here is an example of the viseme output.
(Viseme), Viseme ID: 13, Audio offset: 2350ms. ```
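As a rough illustration (not the article's own snippet), a subscription in Python might look like the following sketch. It assumes the `azure-cognitiveservices-speech` package and that the event arguments expose `viseme_id` and `audio_offset`; adjust names for your SDK version.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own key and region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

def on_viseme(evt):
    # audio_offset is reported in ticks (100 nanoseconds); divide by 10,000 for milliseconds.
    print(f"Viseme ID: {evt.viseme_id}, Audio offset: {evt.audio_offset / 10000:.0f}ms")

# Subscribe before starting synthesis so no events are missed.
synthesizer.viseme_received.connect(on_viseme)
synthesizer.speak_text_async("This is the waistline, and it's falling.").get()
```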
-After obtaining the viseme output, you can use these events to drive character animation. You can build your own characters and automatically animate the characters.
+After you obtain the viseme output, you can use these events to drive character animation. You can build your own characters and automatically animate them.
-For 2D characters, you can design a character that suits your scenario and use Scalable Vector Graphics (SVG) for each viseme ID to get a time-based face position. With temporal tags provided in viseme event, these well-designed SVGs will be processed with smoothing modifications, and provide robust animation to the users. For example, below illustration shows a red lip character designed for language learning.
+For 2D characters, you can design a character that suits your scenario and use Scalable Vector Graphics (SVG) for each viseme ID to get a time-based face position. With the temporal tags that are provided in a viseme event, these well-designed SVGs are processed with smoothing modifications to provide robust animation to users. For example, the following illustration shows a red-lipped character that's designed for language learning.
-![2D render example](media/text-to-speech/viseme-demo-2D.png)
+![Screenshot showing a 2D rendering example of four red-lipped mouths, each representing a different viseme ID that corresponds to a phoneme.](media/text-to-speech/viseme-demo-2D.png)
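To make the idea concrete, here's a small, hypothetical sketch that turns collected (audio offset, viseme ID) pairs into a keyframe timeline for a 2D renderer. The SVG file naming and the sample event values are assumptions for illustration only.

```python
# Hypothetical mapping from viseme events to 2D keyframes.
# Assumes you've collected (audio_offset_ticks, viseme_id) pairs from the VisemeReceived event.
viseme_events = [(500000, 0), (7500000, 13), (23500000, 6)]

keyframes = [
    {
        "time_ms": offset_ticks / 10_000,           # ticks (100 ns) -> milliseconds
        "image": f"visemes/viseme-{viseme_id}.svg"  # one SVG per viseme ID (your own assets)
    }
    for offset_ticks, viseme_id in viseme_events
]

for frame in keyframes:
    print(f'{frame["time_ms"]:>8.1f} ms -> {frame["image"]}')
```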
For 3D characters, think of the characters as string puppets. The puppet master pulls the strings from one state to another and the laws of physics do the rest and drive the puppet to move fluidly. The viseme output acts as a puppet master to provide an action timeline. The animation engine defines the physical laws of action. By interpolating frames with easing algorithms, the engine can further generate high-quality animations. ## Map phonemes to visemes
-Visemes vary by language. Each language has a set of visemes that correspond to its specific phonemes. The following table shows the correspondence between International Phonetic Alphabet (IPA) phonemes and viseme IDs for English (United States).
+Visemes vary by language. Each language has a set of visemes that correspond to its specific phonemes. The following table shows the correspondence between International Phonetic Alphabet (IPA) phonemes and viseme IDs for English (US).
| IPA | Example | Viseme ID | |--||--|
cognitive-services Setup Platform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/quickstarts/setup-platform.md
Title: 'Quickstart: Set up development environment'
+ Title: 'Quickstart: Set up the development environment'
-description: In this quickstart, you'll learn how to install the Speech SDK for your preferred platform and programming language combination.
+description: In this quickstart, you'll learn how to install the Speech SDK for your preferred combination of platform and programming language.
zone_pivot_groups: programming-languages-speech-services-one-nomore
-# Quickstart: Setup development environment
+# Quickstart: Set up the development environment
::: zone pivot="programming-language-csharp"
zone_pivot_groups: programming-languages-speech-services-one-nomore
[!INCLUDE [browser](../includes/quickstarts/platform/javascript-browser.md)]
-#### [NodeJS](#tab/nodejs)
+#### [Node.js](#tab/nodejs)
[!INCLUDE [node](../includes/quickstarts/platform/javascript-node.md)]
cognitive-services Speaker Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speaker-recognition-overview.md
Title: Speaker Recognition overview - Speech service
+ Title: Speaker recognition overview - Speech service
-description: Speaker Recognition provides algorithms that verify and identify speakers by their unique voice characteristics using voice biometry. Speaker Recognition is used to answer the question ΓÇ£who is speaking?ΓÇ¥. This article is an overview of the benefits and capabilities of the Speaker Recognition service.
+description: Speaker recognition provides algorithms that verify and identify speakers by their unique voice characteristics, by using voice biometry. Speaker recognition is used to answer the question "who is speaking?". This article is an overview of the benefits and capabilities of the speaker recognition feature.
keywords: speaker recognition, voice biometry
-# What is Speaker Recognition?
+# What is speaker recognition?
-Speaker Recognition can help determine who is speaking in an audio clip. The service can verify and identify speakers by their unique voice characteristics using voice biometry.
+Speaker recognition can help determine who is speaking in an audio clip. The service can verify and identify speakers by their unique voice characteristics, by using voice biometry.
-You provide audio training data for a single speaker, which creates an enrollment profile based on the unique characteristics of the speaker's voice. You can then cross-check audio voice samples against this profile to verify that the speaker is the same person (speaker verification), or cross-check audio voice samples against a *group* of enrolled speaker profiles to see if it matches any profile in the group (speaker identification).
+You provide audio training data for a single speaker, which creates an enrollment profile based on the unique characteristics of the speaker's voice. You can then cross-check audio voice samples against this profile to verify that the speaker is the same person (speaker verification). You can also cross-check audio voice samples against a *group* of enrolled speaker profiles to see if it matches any profile in the group (speaker identification).
> [!IMPORTANT]
-> Microsoft limits access to Speaker Recognition. You can apply for access through the [Azure Cognitive Services Speaker Recognition Limited Access Review](https://aka.ms/azure-speaker-recognition). For more information, please visit [Limited access for Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition).
+> Microsoft limits access to speaker recognition. You can apply for access through the [Azure Cognitive Services speaker recognition limited access review](https://aka.ms/azure-speaker-recognition). For more information, see [Limited access for speaker recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition).
-## Speaker Verification
+## Speaker verification
-Speaker Verification streamlines the process of verifying an enrolled speaker identity with either passphrases or free-form voice input. For example, you can use it for customer identity verification in call centers or contact-less facility access.
+Speaker verification streamlines the process of verifying an enrolled speaker identity with either passphrases or free-form voice input. For example, you can use it for customer identity verification in call centers or contactless facility access.
-### How does Speaker Verification work?
+### How does speaker verification work?
+The following flowchart provides a visual overview of how speaker verification works:
-Speaker verification can be either text-dependent or text-independent. **Text-dependent** verification means speakers need to choose the same passphrase to use during both enrollment and verification phases. **Text-independent** verification means speakers can speak in everyday language in the enrollment and verification phrases.
-For **text-dependent** verification, the speaker's voice is enrolled by saying a passphrase from a set of predefined phrases. Voice features are extracted from the audio recording to form a unique voice signature, while the chosen passphrase is also recognized. Together, the voice signature and the passphrase are used to verify the speaker.
+Speaker verification can be either text-dependent or text-independent. *Text-dependent* verification means that speakers need to choose the same passphrase to use during both enrollment and verification phases. *Text-independent* verification means that speakers can speak in everyday language in the enrollment and verification phrases.
-**Text-independent** verification has no restrictions on what the speaker says during enrollment, besides the initial activation phrase to activate the enrollment. It doesn't have any restrictions on the audio sample to be verified, as it only extracts voice features to score similarity.
+For text-dependent verification, the speaker's voice is enrolled by saying a passphrase from a set of predefined phrases. Voice features are extracted from the audio recording to form a unique voice signature, and the chosen passphrase is also recognized. Together, the voice signature and the passphrase are used to verify the speaker.
-The APIs are not intended to determine whether the audio is from a live person or an imitation/recording of an enrolled speaker.
+Text-independent verification has no restrictions on what the speaker says during enrollment, besides the initial activation phrase to activate the enrollment. It doesn't have any restrictions on the audio sample to be verified, because it only extracts voice features to score similarity.
-## Speaker Identification
+The APIs aren't intended to determine whether the audio is from a live person, or from an imitation or recording of an enrolled speaker.
-Speaker Identification is used to determine an unknown speakerΓÇÖs identity within a group of enrolled speakers. Speaker Identification enables you to attribute speech to individual speakers, and unlock value from scenarios with multiple speakers, such as:
+## Speaker identification
-* Support solutions for remote meeting productivity
-* Build multi-user device personalization
+Speaker identification helps you determine an unknown speaker's identity within a group of enrolled speakers. Speaker identification enables you to attribute speech to individual speakers, and unlock value from scenarios with multiple speakers, such as:
-### How does Speaker Identification work?
+* Supporting solutions for remote meeting productivity.
+* Building multi-user device personalization.
-Enrollment for speaker identification is **text-independent**, which means that there are no restrictions on what the speaker says in the audio, besides the initial activation phrase to activate the enrollment. Similar to Speaker Verification, the speaker's voice is recorded in the enrollment phase, and the voice features are extracted to form a unique voice signature. In the identification phase, the input voice sample is compared to a specified list of enrolled voices (up to 50 in each request).
+### How does speaker identification work?
+
+Enrollment for speaker identification is text-independent. There are no restrictions on what the speaker says in the audio, besides the initial activation phrase to activate the enrollment. Similar to speaker verification, the speaker's voice is recorded in the enrollment phase, and the voice features are extracted to form a unique voice signature. In the identification phase, the input voice sample is compared to a specified list of enrolled voices (up to 50 in each request).
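The distinction between verification and identification can be sketched conceptually. The following toy example is *not* the speaker recognition API; it uses made-up voice-feature vectors and cosine similarity purely to show that verification compares a sample against one enrolled profile with a threshold, while identification picks the best match from a group.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy "voice signatures" extracted at enrollment (purely illustrative numbers).
enrolled = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def verify(sample, profile, threshold=0.85):
    """Verification: does this sample match one known, enrolled profile?"""
    return cosine_similarity(sample, enrolled[profile]) >= threshold

def identify(sample):
    """Identification: which enrolled profile in the group matches best?"""
    return max(enrolled, key=lambda name: cosine_similarity(sample, enrolled[name]))

sample = [0.88, 0.15, 0.28]
print(verify(sample, "alice"))   # True if the similarity clears the threshold
print(identify(sample))          # "alice"
```

In the actual service, the voice features and scoring are produced by the API; this sketch only mirrors the shape of the two workflows.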
## Data security and privacy
-Speaker enrollment data is stored in a secured system, including the speech audio for enrollment and the voice signature features. The speech audio for enrollment is only used when the algorithm is upgraded, and the features need to be extracted again. The service does not retain the speech recording or the extracted voice features that are sent to the service during the recognition phase.
+Speaker enrollment data is stored in a secured system, including the speech audio for enrollment and the voice signature features. The speech audio for enrollment is only used when the algorithm is upgraded, and the features need to be extracted again. The service doesn't retain the speech recording or the extracted voice features that are sent to the service during the recognition phase.
You control how long data should be retained. You can create, update, and delete enrollment data for individual speakers through API calls. When the subscription is deleted, all the speaker enrollment data associated with the subscription will also be deleted.
-As with all of the Cognitive Services resources, developers who use the Speaker Recognition service must be aware of Microsoft's policies on customer data. You should ensure that you have received the appropriate permissions from the users for Speaker Recognition. You can find more details in [Data and privacy for Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition). For more information, see the [Cognitive Services page](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/) on the Microsoft Trust Center.
+As with all of the Cognitive Services resources, developers who use the speaker recognition feature must be aware of Microsoft policies on customer data. You should ensure that you have received the appropriate permissions from the users. You can find more details in [Data and privacy for speaker recognition](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition). For more information, see the [Cognitive Services page](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/) on the Microsoft Trust Center.
## Common questions and solutions | Question | Solution | ||-|
-| What scenarios can Speaker Recognition be used for? | Call center customer verification, voice-based patient check-in, meeting transcription, multi-user device personalization|
-| What is the difference between Identification and Verification? | Identification is the process of detecting which member from a group of speakers is speaking. Verification is the act of confirming that a speaker matches a known, or **enrolled** voice.|
-| What's the difference between text-dependent and text-independent verification? | Text-dependent verification requires a specific pass-phrase for both enrollment and recognition. Text-independent verification requires a longer voice sample that must start with a particular activation phrase for enrollment, but anything can be spoken, including during recognition.|
-| What languages are supported? | See [Speaker recognition language support](language-support.md#speaker-recognition) |
-| What Azure regions are supported? | See [Speaker recognition region support](regions.md#speaker-recognition)|
-| What audio formats are supported? | Mono 16 bit, 16 kHz PCM-encoded WAV |
-| **Accept** and **Reject** responses aren't accurate, how do you tune the threshold? | Since the optimal threshold varies highly with scenarios, the service decides whether to accept or reject based on a default threshold of 0.5. You should override the default decision and fine tune the result based on your own scenario. |
+| In what situations am I most likely to use speaker recognition? | Good examples include call center customer verification, voice-based patient check-in, meeting transcription, and multi-user device personalization.|
+| What's the difference between identification and verification? | Identification is the process of detecting which member from a group of speakers is speaking. Verification is the act of confirming that a speaker matches a known, *enrolled* voice.|
+| What languages are supported? | See [Speaker recognition language support](language-support.md#speaker-recognition). |
+| What Azure regions are supported? | See [Speaker recognition region support](regions.md#speaker-recognition).|
+| What audio formats are supported? | Mono 16 bit, 16 kHz PCM-encoded WAV. |
| Can you enroll one speaker multiple times? | Yes, for text-dependent verification, you can enroll a speaker up to 50 times. For text-independent verification or speaker identification, you can enroll with up to 300 seconds of audio. |
-| What data is stored in Azure? | Enrollment audio is stored in the service until the voice profile is [deleted](./get-started-speaker-recognition.md#deleting-voice-profile-enrollments). Recognition audio samples are not retained or stored. |
+| What data is stored in Azure? | Enrollment audio is stored in the service until the voice profile is [deleted](./get-started-speaker-recognition.md#deleting-voice-profile-enrollments). Recognition audio samples aren't retained or stored. |
## Next steps > [!div class="nextstepaction"]
-> [Speaker Recognition quickstart](./get-started-speaker-recognition.md)
+> [Speaker recognition quickstart](./get-started-speaker-recognition.md)
cognitive-services Speech Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-configuration.md
This setting can be found in the following place:
| Required | Name | Data type | Description | | -- | - | | -- |
-| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gathering required parameters](speech-container-howto.md#gathering-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
+| Yes | `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](speech-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../cognitive-services-custom-subdomains.md). |
## Eula setting
Replace {_argument_name_} with your own values:
| Placeholder | Value | Format or example | | -- | -- | -- | | **{API_KEY}** | The endpoint key of the `Speech` resource on the Azure `Speech` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Speech` Overview page. | See [gathering required parameters](speech-container-howto.md#gathering-required-parameters) for explicit examples. |
+| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Speech` Overview page. | See [gather required parameters](speech-container-howto.md#gather-required-parameters) for explicit examples. |
[!INCLUDE [subdomains-note](../../../includes/cognitive-services-custom-subdomains-note.md)]
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto.md
keywords: on-premises, Docker, container
-# Install and run Docker containers for the Speech service APIs
+# Install and run Docker containers for the Speech service APIs
-Containers enable you to run _some_ of the Speech service APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Speech container.
+By using containers, you can run _some_ of the Azure Cognitive Services Speech service APIs in your own environment. Containers are great for specific security and data governance requirements. In this article, you'll learn how to download, install, and run a Speech container.
-Speech containers enable customers to build a speech application architecture that is optimized for both robust cloud capabilities and edge locality. There are several containers available, which use the same [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) as the cloud-based Azure Speech Services.
+With Speech containers, you can build a speech application architecture that's optimized for both robust cloud capabilities and edge locality. Several containers are available, which use the same [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) as the cloud-based Azure Speech services.
> [!IMPORTANT]
-> We retired the standard speech synthesis voices and text-to-speech container on August 31st 2021. Consider migrating your applications to use the Neural text-to-speech container instead. [Follow these steps](./how-to-migrate-to-prebuilt-neural-voice.md) for more information on updating your application.
+> We retired the standard speech synthesis voices and text-to-speech container on August 31, 2021. Consider migrating your applications to use the neural text-to-speech container instead. For more information on updating your application, see [Migrate from standard voice to prebuilt neural voice](./how-to-migrate-to-prebuilt-neural-voice.md).
| Container | Features | Latest | Release status | |--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.0.0 | Generally Available |
-| Custom Speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.0.0 | Generally Available |
-| Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.15.0 | Generally Available |
-| Speech Language Identification | Detect the language spoken in audio files. | 1.5.0 | preview |
-| Neural Text-to-speech | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | 1.12.0 | Generally Available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.0.0 | Generally available |
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.0.0 | Generally available |
+| Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.15.0 | Generally available |
+| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 1.12.0 | Generally available |
## Prerequisites > [!IMPORTANT]
-> * To use the speech containers you must submit an online request, and have it approved. See the **Request approval to the run the container** section below for more information.
-> * *Generally Available* containers meet Microsoft's stability and support requirements. Containers in *Preview* are still under development.
+> * To use the Speech containers, you must submit an online request and have it approved. For more information, see the "Request approval to run the container" section.
+> * *Generally available* containers meet Microsoft's stability and support requirements. Containers in *preview* are still under development.
-You must meet the following prerequisites before using Speech service containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+You must meet the following prerequisites before you use Speech service containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. You need:
-* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
* On Windows, Docker must also be configured to support Linux containers. * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/). * A <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech service resource" target="_blank">Speech service resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). [!INCLUDE [Gathering required parameters](../containers/includes/container-gathering-required-parameters.md)] - ## Host computer requirements and recommendations [!INCLUDE [Host Computer requirements](../../../includes/cognitive-services-containers-host-computer.md)] ### Container requirements and recommendations
-The following table describes the minimum and recommended allocation of resources for each Speech container.
+The following table describes the minimum and recommended allocation of resources for each Speech container:
| Container | Minimum | Recommended | |--||-| | Speech-to-text | 2 core, 2-GB memory | 4 core, 4-GB memory |
-| Custom Speech-to-text | 2 core, 2-GB memory | 4 core, 4-GB memory |
+| Custom speech-to-text | 2 core, 2-GB memory | 4 core, 4-GB memory |
| Text-to-speech | 1 core, 2-GB memory | 2 core, 3-GB memory |
-| Speech Language Identification | 1 core, 1-GB memory | 1 core, 1-GB memory |
-| Neural Text-to-speech | 6 core, 12-GB memory | 8 core, 16-GB memory |
+| Speech language identification | 1 core, 1-GB memory | 1 core, 1-GB memory |
+| Neural text-to-speech | 6 core, 12-GB memory | 8 core, 16-GB memory |
-* Each core must be at least 2.6 gigahertz (GHz) or faster.
+Each core must be at least 2.6 gigahertz (GHz) or faster.
Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command. > [!NOTE]
-> The minimum and recommended are based off of Docker limits, *not* the host machine resources. For example, speech-to-text containers memory map portions of a large language model, and it is *recommended* that the entire file fits in memory, which is an additional 4-6 GB. Also, the first run of either container may take longer, since models are being paged into memory.
+> The minimum and recommended allocations are based on Docker limits, *not* the host machine resources. For example, speech-to-text containers memory-map portions of a large language model. We recommend that the entire file fit in memory, which is an additional 4 to 6 GB. Also, the first run of either container might take longer because models are being paged into memory.
### Advanced Vector Extension support
-The **host** is the computer that runs the docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
+The *host* is the computer that runs the Docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
```console grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected
grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detect
> [!WARNING] > The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support.
-## Request approval to the run the container
+## Request approval to run the container
-Fill out and submit the [request form](https://aka.ms/csgate) to request access to the container.
+Fill out and submit the [request form](https://aka.ms/csgate) to request access to the container.
[!INCLUDE [Request access to public preview](../../../includes/cognitive-services-containers-request-access.md)]
+## Get the container image with docker pull
-## Get the container image with `docker pull`
-
-Container images for Speech are available in the following Container Registry.
+Container images for Speech are available in the following container registry.
# [Speech-to-text](#tab/stt)
Container images for Speech are available in the following Container Registry.
|--|| | Speech-to-text | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest` |
-# [Custom Speech-to-text](#tab/cstt)
+# [Custom speech-to-text](#tab/cstt)
| Container | Repository | |--||
-| Custom Speech-to-text | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` |
+| Custom speech-to-text | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` |
# [Text-to-speech](#tab/tts)
Container images for Speech are available in the following Container Registry.
|--|| | Text-to-speech | `mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech:latest` |
-# [Neural Text-to-speech](#tab/ntts)
+# [Neural text-to-speech](#tab/ntts)
| Container | Repository | |--||
-| Neural Text-to-speech | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest` |
+| Neural text-to-speech | `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest` |
-# [Speech Language Identification](#tab/lid)
+# [Speech language identification](#tab/lid)
> [!TIP]
-> To get the most useful results, we recommend using the Speech language identification container with the Speech-to-text or Custom speech-to-text containers.
+> To get the most useful results, use the Speech language identification container with the speech-to-text or custom speech-to-text containers.
| Container | Repository | |--||
-| Speech Language Identification | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` |
+| Speech language identification | `mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest` |
***
Container images for Speech are available in the following Container Registry.
# [Speech-to-text](#tab/stt)
-#### Docker pull for the Speech-to-text container
+#### Docker pull for the speech-to-text container
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container registry.
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
```Docker docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest ``` > [!IMPORTANT]
-> The `latest` tag pulls the `en-US` locale. For additional locales see [Speech-to-text locales](#speech-to-text-locales).
+> The `latest` tag pulls the `en-US` locale. For additional locales, see [Speech-to-text locales](#speech-to-text-locales).
#### Speech-to-text locales
-All tags, except for `latest` are in the following format and are case-sensitive:
+All tags, except for `latest`, are in the following format and are case sensitive:
``` <major>.<minor>.<patch>-<platform>-<locale>-<prerelease>
The following tag is an example of the format:
2.6.0-amd64-en-us ```
-For all of the supported locales of the **speech-to-text** container, please see [Speech-to-text image tags](../containers/container-image-tags.md#speech-to-text).
+For all the supported locales of the speech-to-text container, see [Speech-to-text image tags](../containers/container-image-tags.md#speech-to-text).
-# [Custom Speech-to-text](#tab/cstt)
+# [Custom speech-to-text](#tab/cstt)
-#### Docker pull for the Custom Speech-to-text container
+#### Docker pull for the custom speech-to-text container
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container registry.
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
```Docker docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-spe
# [Text-to-speech](#tab/tts)
-#### Docker pull for the Text-to-speech container
+#### Docker pull for the text-to-speech container
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container registry.
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
```Docker docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech:latest ``` > [!IMPORTANT]
-> The `latest` tag pulls the `en-US` locale and `ariarus` voice. For additional locales see [Text-to-speech locales](#text-to-speech-locales).
+> The `latest` tag pulls the `en-US` locale and `ariarus` voice. For more locales, see [Text-to-speech locales](#text-to-speech-locales).
#### Text-to-speech locales
-All tags, except for `latest` are in the following format and are case-sensitive:
+All tags, except for `latest`, are in the following format and are case sensitive:
``` <major>.<minor>.<patch>-<platform>-<locale>-<voice>-<prerelease>
The following tag is an example of the format:
1.8.0-amd64-en-us-ariarus ```
-For all of the supported locales and corresponding voices of the **text-to-speech** container, please see [Text-to-speech image tags](../containers/container-image-tags.md#text-to-speech).
+For all the supported locales and corresponding voices of the text-to-speech container, see [Text-to-speech image tags](../containers/container-image-tags.md#text-to-speech).
> [!IMPORTANT]
-> When constructing a *Text-to-speech* HTTP POST, the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container locale and voice, also known as the ["short name"](how-to-migrate-to-prebuilt-neural-voice.md). For example, the `latest` tag would have a voice name of `en-US-AriaRUS`.
+> When you construct a text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container locale and voice, which is also known as the [short name](how-to-migrate-to-prebuilt-neural-voice.md). For example, the `latest` tag would have a voice name of `en-US-AriaRUS`.
-# [Neural Text-to-speech](#tab/ntts)
+# [Neural text-to-speech](#tab/ntts)
-#### Docker pull for the Neural Text-to-speech container
+#### Docker pull for the neural text-to-speech container
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container registry.
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
```Docker docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:latest ``` > [!IMPORTANT]
-> The `latest` tag pulls the `en-US` locale and `arianeural` voice. For additional locales see [Neural Text-to-speech locales](#neural-text-to-speech-locales).
+> The `latest` tag pulls the `en-US` locale and `arianeural` voice. For more locales, see [Neural text-to-speech locales](#neural-text-to-speech-locales).
-#### Neural Text-to-speech locales
+#### Neural text-to-speech locales
-All tags, except for `latest` are in the following format and are case-sensitive:
+All tags, except for `latest`, are in the following format and are case sensitive:
``` <major>.<minor>.<patch>-<platform>-<locale>-<voice>
The following tag is an example of the format:
1.3.0-amd64-en-us-arianeural ```
-For all of the supported locales and corresponding voices of the **neural text-to-speech** container, please see [Neural Text-to-speech image tags](../containers/container-image-tags.md#neural-text-to-speech).
+For all the supported locales and corresponding voices of the neural text-to-speech container, see [Neural text-to-speech image tags](../containers/container-image-tags.md#neural-text-to-speech).
> [!IMPORTANT]
-> When constructing a *Neural Text-to-speech* HTTP POST, the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container locale and voice, also known as the ["short name"](language-support.md#prebuilt-neural-voices). For example, the `latest` tag would have a voice name of `en-US-AriaNeural`.
+> When you construct a neural text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container locale and voice, which is also known as the [short name](language-support.md#prebuilt-neural-voices). For example, the `latest` tag would have a voice name of `en-US-AriaNeural`.
-# [Speech Language Identification](#tab/lid)
+# [Speech language identification](#tab/lid)
-#### Docker pull for the Speech Language Identification container
+#### Docker pull for the Speech language identification container
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container registry.
+Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
```Docker docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-detection:latest
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-d
***
-## How to use the container
+## Use the container
-Once the container is on the [host computer](#host-computer-requirements-and-recommendations), use the following process to work with the container.
+After the container is on the [host computer](#host-computer-requirements-and-recommendations), use the following process to work with the container.
-1. [Run the container](#run-the-container-with-docker-run), with the required billing settings. More [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
+1. [Run the container](#run-the-container-with-docker-run) with the required billing settings. More [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
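For example, after the speech-to-text container is running on port 5000, a minimal query from the Speech SDK for Python might look like the following sketch. The `host` form of `SpeechConfig` and the file path are assumptions; adjust them for your setup.

```python
import azure.cognitiveservices.speech as speechsdk

# Point the SDK at the local container instead of the cloud endpoint (assumed host and port).
speech_config = speechsdk.SpeechConfig(host="ws://localhost:5000")
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")  # your own 16 kHz WAV file

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
result = recognizer.recognize_once()
print(result.text)
```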
-## Run the container with `docker run`
+## Run the container with docker run
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gathering required parameters](#gathering-required-parameters) for details on how to get the `{Endpoint_URI}` and `{API_Key}` values. Additional [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are also available.
+Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. For more information on how to get the `{Endpoint_URI}` and `{API_Key}` values, see [Gather required parameters](#gather-required-parameters). More [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are also available.
## Run the container in disconnected environments
-Starting in container version 3.0.0, select customers can run speech-to-text containers in an environment without Internet accessibility. See [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md) for more information.
-
+Starting in container version 3.0.0, select customers can run speech-to-text containers in an environment without internet accessibility. For more information, see [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md).
# [Speech-to-text](#tab/stt)
-To run the Standard *Speech-to-text* container, execute the following `docker run` command.
+To run the standard speech-to-text container, execute the following `docker run` command:
```bash docker run --rm -it -p 5000:5000 --memory 4g --cpus 4 \
ApiKey={API_KEY}
This command:
-* Runs a *Speech-to-text* container from the container image.
-* Allocates 4 CPU cores and 4 gigabytes (GB) of memory.
+* Runs a *speech-to-text* container from the container image.
+* Allocates 4 CPU cores and 4 GB of memory.
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. * Automatically removes the container after it exits. The container image is still available on the host computer. > [!NOTE]
-> Containers support compressed audio input to Speech SDK using GStreamer.
+> Containers support compressed audio input to the Speech SDK by using GStreamer.
> To install GStreamer in a container, > follow Linux instructions for GStreamer in [Use codec compressed audio input with the Speech SDK](how-to-use-codec-compressed-audio-input-streams.md). #### Diarization on the speech-to-text output
-Diarization is enabled by default. to get diarization in your response, use `diarize_speech_config.set_service_property`.
+
+Diarization is enabled by default. To get diarization in your response, use `diarize_speech_config.set_service_property`.
1. Set the phrase output format to `Detailed`. 2. Set the mode of diarization. The supported modes are `Identity` and `Anonymous`.
-```python
-diarize_speech_config.set_service_property(
- name='speechcontext-PhraseOutput.Format',
- value='Detailed',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
-)
+
+ ```python
+ diarize_speech_config.set_service_property(
+ name='speechcontext-PhraseOutput.Format',
+ value='Detailed',
+ channel=speechsdk.ServicePropertyChannel.UriQueryParameter
+ )
+
+ diarize_speech_config.set_service_property(
+ name='speechcontext-phraseDetection.speakerDiarization.mode',
+ value='Identity',
+ channel=speechsdk.ServicePropertyChannel.UriQueryParameter
+ )
+ ```
-diarize_speech_config.set_service_property(
- name='speechcontext-phraseDetection.speakerDiarization.mode',
- value='Identity',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
-)
-```
-> [!NOTE]
-> "Identity" mode returns `"SpeakerId": "Customer"` or `"SpeakerId": "Agent"`.
-> "Anonymous" mode returns `"SpeakerId": "Speaker 1"` or `"SpeakerId": "Speaker 2"`
+ > [!NOTE]
+ > "Identity" mode returns `"SpeakerId": "Customer"` or `"SpeakerId": "Agent"`.
+ > "Anonymous" mode returns `"SpeakerId": "Speaker 1"` or `"SpeakerId": "Speaker 2"`.
+
+#### Analyze sentiment on the speech-to-text output
+Starting in v2.6.0 of the speech-to-text container, you should use the Language service 3.0 API endpoint instead of the preview one. For example:
-#### Analyze sentiment on the speech-to-text output
-Starting in v2.6.0 of the speech-to-text container, you should use Language service 3.0 API endpoint instead of the preview one. For example
* `https://westus2.api.cognitive.microsoft.com/text/analytics/v3.0/sentiment` * `https://localhost:5000/text/analytics/v3.0/sentiment` > [!NOTE]
-> The Language service `v3.0` API is not backward compatible with `v3.0-preview.1`. To get the latest sentiment feature support, use `v2.6.0` of the speech-to-text container image and Language service `v3.0`.
+> The Language service `v3.0` API isn't backward compatible with `v3.0-preview.1`. To get the latest sentiment feature support, use `v2.6.0` of the speech-to-text container image and Language service `v3.0`.
+
+Starting in v2.2.0 of the speech-to-text container, you can call the [sentiment analysis v3 API](../text-analytics/how-tos/text-analytics-how-to-sentiment-analysis.md) on the output. To call sentiment analysis, you'll need a Language service API resource endpoint. For example:
-Starting in v2.2.0 of the speech-to-text container, you can call the [sentiment analysis v3 API](../text-analytics/how-tos/text-analytics-how-to-sentiment-analysis.md) on the output. To call sentiment analysis, you will need a Language service API resource endpoint. For example:
* `https://westus2.api.cognitive.microsoft.com/text/analytics/v3.0-preview.1/sentiment`
* `https://localhost:5000/text/analytics/v3.0-preview.1/sentiment`
-If you're accessing a Language service endpoint in the cloud, you will need a key. If you're running Language service features locally, you may not need to provide this.
+If you're accessing a Language service endpoint in the cloud, you'll need a key. If you're running Language service features locally, you might not need to provide this.
-The key and endpoint are passed to the Speech container as arguments, as in the following example.
+The key and endpoint are passed to the Speech container as arguments, as in the following example:
```bash docker run -it --rm -p 5000:5000 \
CloudAI:SentimentAnalysisSettings:SentimentAnalysisApiKey={SENTIMENT_APIKEY}
This command:
-* Performs the same steps as the command above.
-* Stores a Language service API endpoint and key, for sending sentiment analysis requests.
+* Performs the same steps as the preceding command.
+* Stores a Language service API endpoint and key, for sending sentiment analysis requests.
-#### Phraselist v2 on the speech-to-text output
+#### Phraselist v2 on the speech-to-text output
-Starting in v2.6.0 of the speech-to-text container, you can get the output with your own phrases - either the whole sentence, or phrases in the middle. For example *the tall man* in the following sentence:
+Starting in v2.6.0 of the speech-to-text container, you can get the output with your own phrases, either the whole sentence or phrases in the middle. For example, *the tall man* in the following sentence:
* "This is a sentence **the tall man** this is another sentence."
To configure a phrase list, you need to add your own phrases when you make the c
) ```
-If you have multiple phrases to add, call `.addPhrase()` for each phrase to add it to the phrase list.
-
+If you have multiple phrases to add, call `.addPhrase()` for each phrase to add it to the phrase list.
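As a minimal C# sketch of the phrase-list approach described above (the host URI and the audio file name are placeholders, not values from this article):

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Point the SDK at the local speech-to-text container (placeholder host).
var config = SpeechConfig.FromHost(new Uri("ws://localhost:5000"));
using var audioConfig = AudioConfig.FromWavFileInput("my-audio.wav"); // placeholder file
using var recognizer = new SpeechRecognizer(config, audioConfig);

// Add each phrase you want the recognizer to favor.
var phraseList = PhraseListGrammar.FromRecognizer(recognizer);
phraseList.AddPhrase("the tall man");

var result = await recognizer.RecognizeOnceAsync();
Console.WriteLine(result.Text);
```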
-# [Custom Speech-to-text](#tab/cstt)
+# [Custom speech-to-text](#tab/cstt)
-The *Custom Speech-to-text* container relies on a custom speech model. The custom model has to have been [trained](how-to-custom-speech-train-model.md) using the [custom speech portal](https://speech.microsoft.com/customspeech).
+The custom speech-to-text container relies on a Custom Speech model. The custom model must have been [trained](how-to-custom-speech-train-model.md) by using the [Custom Speech portal](https://speech.microsoft.com/customspeech).
-The custom speech **Model ID** is required to run the container. It can be found on the **Training** page of the custom speech portal. From the custom speech portal, navigate to the **Training** page and select the model.
+The custom speech **Model ID** is required to run the container. It can be found on the **Training** page of the Custom Speech portal. From the Custom Speech portal, go to the **Training** page and select the model.
<br>
-![Custom speech training page](media/custom-speech/custom-speech-model-training.png)
+![Screenshot that shows the Custom Speech training page.](media/custom-speech/custom-speech-model-training.png)
Obtain the **Model ID** to use as the argument to the `ModelId` parameter of the `docker run` command. <br>
-![Custom speech model details](media/custom-speech/custom-speech-model-details.png)
+![Screenshot that shows Custom Speech model details.](media/custom-speech/custom-speech-model-details.png)
The following table represents the various `docker run` parameters and their corresponding descriptions: | Parameter | Description | |||
-| `{VOLUME_MOUNT}` | The host computer [volume mount](https://docs.docker.com/storage/volumes/), which docker uses to persist the custom model. For example, *C:\CustomSpeech* where the *C drive* is located on the host machine. |
-| `{MODEL_ID}` | The Custom Speech **Model ID** from the **Training** page of the custom speech portal. |
-| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [gathering required parameters](#gathering-required-parameters). |
-| `{API_KEY}` | The API key is required. For more information, see [gathering required parameters](#gathering-required-parameters). |
+| `{VOLUME_MOUNT}` | The host computer [volume mount](https://docs.docker.com/storage/volumes/), which Docker uses to persist the custom model. An example is *C:\CustomSpeech* where the C drive is located on the host machine. |
+| `{MODEL_ID}` | The custom speech **Model ID** from the **Training** page of the Custom Speech portal. |
+| `{ENDPOINT_URI}` | The endpoint is required for metering and billing. For more information, see [Gather required parameters](#gather-required-parameters). |
+| `{API_KEY}` | The API key is required. For more information, see [Gather required parameters](#gather-required-parameters). |
-To run the *Custom Speech-to-text* container, execute the following `docker run` command:
+To run the custom speech-to-text container, execute the following `docker run` command:
```bash docker run --rm -it -p 5000:5000 --memory 4g --cpus 4 \
ApiKey={API_KEY}
This command:
-* Runs a *Custom Speech-to-text* container from the container image.
-* Allocates 4 CPU cores and 4 gigabytes (GB) of memory.
-* Loads the *Custom Speech-to-Text* model from the volume input mount, for example *C:\CustomSpeech*.
+* Runs a custom speech-to-text container from the container image.
+* Allocates 4 CPU cores and 4 GB of memory.
+* Loads the custom speech-to-text model from the volume input mount, for example, *C:\CustomSpeech*.
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. * Downloads the model given the `ModelId` (if not found on the volume mount). * If the custom model was previously downloaded, the `ModelId` is ignored. * Automatically removes the container after it exits. The container image is still available on the host computer.
+#### Base model download on the custom speech-to-text container
-#### Base model download on the custom speech-to-text container
-Starting in v2.6.0 of the custom-speech-to-text container, you can get the available base model information by using option `BaseModelLocale=<locale>`. This option will give you a list of available base models on that locale under your billing account. For example:
+Starting in v2.6.0 of the custom-speech-to-text container, you can get the available base model information by using the option `BaseModelLocale=<locale>`. This option gives you a list of available base models for that locale under your billing account. For example:
```bash docker run --rm -it \
ApiKey={API_KEY}
This command:
-* Runs a *Custom Speech-to-text* container from the container image.
-* Check and return the available base models of the target locale.
+* Runs a custom speech-to-text container from the container image.
+* Checks and returns the available base models of the target locale.
The output gives you a list of base models with information that includes the locale, model ID, and creation date and time. You can use the model ID to download and use the specific base model you prefer. For example:

```
Checking available base model for en-us
2020/10/30 21:54:21 [Fatal] Please run this tool again and assign --modelId '<one above base model id>'. If no model id listed above, it means currently there is no available base model for en-us ```
-#### Custom pronunciation on the custom speech-to-text container
-Starting in v2.5.0 of the custom-speech-to-text container, you can get custom pronunciation result in the output. All you need to do is to have your own custom pronunciation rules set up in your custom model and mount the model to custom-speech-to-text container.
+#### Custom pronunciation on the custom speech-to-text container
+Starting in v2.5.0 of the custom-speech-to-text container, you can get custom pronunciation results in the output. All you need to do is have your own custom pronunciation rules set up in your custom model and mount the model to a custom-speech-to-text container.
# [Text-to-speech](#tab/tts)
-To run the Standard *Text-to-speech* container, execute the following `docker run` command.
+To run the standard text-to-speech container, execute the following `docker run` command:
```bash docker run --rm -it -p 5000:5000 --memory 2g --cpus 1 \
ApiKey={API_KEY}
This command:
-* Runs a Standard *Text-to-speech* container from the container image.
-* Allocates 1 CPU core and 2 gigabytes (GB) of memory.
+* Runs a standard text-to-speech container from the container image.
+* Allocates 1 CPU core and 2 GB of memory.
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. * Automatically removes the container after it exits. The container image is still available on the host computer.
-# [Neural Text-to-speech](#tab/ntts)
+# [Neural text-to-speech](#tab/ntts)
-To run the *Neural Text-to-speech* container, execute the following `docker run` command.
+To run the neural text-to-speech container, execute the following `docker run` command:
```bash docker run --rm -it -p 5000:5000 --memory 12g --cpus 6 \
ApiKey={API_KEY}
This command:
-* Runs a *Neural Text-to-speech* container from the container image.
-* Allocates 6 CPU cores and 12 gigabytes (GB) of memory.
+* Runs a neural text-to-speech container from the container image.
+* Allocates 6 CPU cores and 12 GB of memory.
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. * Automatically removes the container after it exits. The container image is still available on the host computer.
-# [Speech Language Identification](#tab/lid)
+# [Speech language identification](#tab/lid)
-To run the *Speech Language Identification* container, execute the following `docker run` command.
+To run the Speech language identification container, execute the following `docker run` command:
```bash docker run --rm -it -p 5003:5003 --memory 1g --cpus 1 \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY} ```
-This command:
+This command:
-* Runs a speech language-detection container from the container image. Currently you will not be charged for running this image.
-* Allocates 1 CPU cores and 1 gigabyte (GB) of memory.
+* Runs a Speech language-detection container from the container image. Currently, you won't be charged for running this image.
+* Allocates 1 CPU core and 1 GB of memory.
* Exposes TCP port 5003 and allocates a pseudo-TTY for the container. * Automatically removes the container after it exits. The container image is still available on the host computer.
-If you want to run this container with the speech-to-text container, you can use this [Docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this Docker Run command to execute `speech-to-text-with-languagedetection-client`.
+If you want to run this container with the speech-to-text container, you can use this [docker image](https://hub.docker.com/r/antsu/on-prem-client). After both containers have been started, use this `docker run` command to execute `speech-to-text-with-languagedetection-client`:
```Docker docker run --rm -v ${HOME}:/root -ti antsu/on-prem-client:latest ./speech-to-text-with-languagedetection-client ./audio/LanguageDetection_en-us.wav --host localhost --lport 5003 --sport 5000 ```
-> [!NOTE]
-> Increasing the number of concurrent calls can impact reliability and latency. For language identification, we recommend a maximum of 4 concurrent calls using 1 CPU with and 1GB of memory. For hosts with 2 CPUs and 2GB of memory, we recommend a maximum of 6 concurrent calls.
+Increasing the number of concurrent calls can affect reliability and latency. For language identification, we recommend a maximum of four concurrent calls using 1 CPU with 1 GB of memory. For hosts with 2 CPUs and 2 GB of memory, we recommend a maximum of six concurrent calls.
*** > [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
+> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container. Otherwise, the container won't start. For more information, see [Billing](#billing).
## Query the container's prediction endpoint
docker run --rm -v ${HOME}:/root -ti antsu/on-prem-client:latest ./speech-to-tex
| Containers | SDK Host URL | Protocol |
|--|--|--|
-| Standard Speech-to-text and Custom Speech-to-text | `ws://localhost:5000` | WS |
-| Text-to-speech (including Standard and Neural), Speech Language identification | `http://localhost:5000` | HTTP |
+| Standard speech-to-text and custom speech-to-text | `ws://localhost:5000` | WS |
+| Text-to-speech (including standard and neural), Speech language identification | `http://localhost:5000` | HTTP |
-For more information on using WSS and HTTPS protocols, see [container security](../cognitive-services-container-support.md#azure-cognitive-services-container-security).
+For more information on using WSS and HTTPS protocols, see [Container security](../cognitive-services-container-support.md#azure-cognitive-services-container-security).
-### Speech-to-text (Standard and Custom)
+### Speech-to-text (standard and custom)
[!INCLUDE [Query Speech-to-text container endpoint](includes/speech-to-text-container-query-endpoint.md)]

#### Analyze sentiment

If you provided your Language service API credentials [to the container](#analyze-sentiment-on-the-speech-to-text-output), you can use the Speech SDK to send speech recognition requests with sentiment analysis. You can configure the API responses to use either a *simple* or *detailed* format.

> [!NOTE]
-> v1.13 of the Speech Service Python SDK has an identified issue with sentiment analysis. Please use v1.12.x or earlier if you're using sentiment analysis in the Speech Service Python SDK.
+> v1.13 of the Speech Service Python SDK has an identified issue with sentiment analysis. Use v1.12.x or earlier if you're using sentiment analysis in the Speech Service Python SDK.
# [Simple format](#tab/simple-format)
speech_config.set_service_property(
) ```
-`Simple.Extensions` will return the sentiment result in root layer of the response.
+`Simple.Extensions` returns the sentiment result in the root layer of the response.
```json {
speech_config.set_service_property(
) ```
-`Detailed.Extensions` provides sentiment result in the root layer of the response. `Detailed.Options` provides the result in `NBest` layer of the response. They can be used separately or together.
+`Detailed.Extensions` provides the sentiment result in the root layer of the response. `Detailed.Options` provides the result in the `NBest` layer of the response. They can be used separately or together.
```json {
speech_config.set_service_property(
) ```
-### Text-to-speech (Standard and Neural)
+### Text-to-speech (standard and neural)
[!INCLUDE [Query Text-to-speech container endpoint](includes/text-to-speech-container-query-endpoint.md)]
speech_config.set_service_property(
If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
-You can have this container and a different Azure Cognitive Services container running on the HOST together. You also can have multiple containers of the same Cognitive Services container running.
+You can have this container and a different Cognitive Services container running on the HOST together. You also can have multiple containers of the same Cognitive Services container running.
[!INCLUDE [Validate container is running - Container API documentation](../../../includes/cognitive-services-containers-api-documentation.md)]
You can have this container and a different Azure Cognitive Services container r
## Troubleshooting
-When starting or running the container, you may experience issues. Use an output [mount](speech-container-configuration.md#mount-settings) and enable logging. Doing so will allow the container to generate log files that are helpful when troubleshooting issues.
+When you start or run the container, you might experience issues. Use an output [mount](speech-container-configuration.md#mount-settings) and enable logging. Doing so allows the container to generate log files that are helpful when you troubleshoot issues.
[!INCLUDE [Cognitive Services FAQ note](../containers/includes/cognitive-services-faq-note.md)]

[!INCLUDE [Diagnostic container](../containers/includes/diagnostics-container.md)]

## Billing
-The Speech containers send billing information to Azure, using a *Speech* resource on your Azure account.
+The Speech containers send billing information to Azure by using a Speech resource on your Azure account.
[!INCLUDE [Container's Billing Settings](../../../includes/cognitive-services-containers-how-to-billing-info.md)]
For more information about these options, see [Configure containers](speech-cont
## Summary
-In this article, you learned concepts and workflow for downloading, installing, and running Speech containers. In summary:
+In this article, you learned about the concepts and workflow for downloading, installing, and running Speech containers. In summary:
-* Speech provides four Linux containers for Docker, encapsulating various capabilities:
- * *Speech-to-text*
- * *Custom Speech-to-text*
- * *Text-to-speech*
- * *Custom Text-to-speech*
- * *Neural Text-to-speech*
- * *Speech Language Identification*
+* Speech provides the following Linux containers for Docker, each with different capabilities:
+ * Speech-to-text
+ * Custom speech-to-text
+ * Text-to-speech
+ * Custom text-to-speech
+ * Neural text-to-speech
+ * Speech language identification
* Container images are downloaded from the container registry in Azure. * Container images run in Docker.
-* Whether using the REST API (Text-to-speech only) or the SDK (Speech-to-text or Text-to-speech) you specify the host URI of the container.
-* You're required to provide billing information when instantiating a container.
+* Whether you use the REST API (text-to-speech only) or the SDK (speech-to-text or text-to-speech), you specify the host URI of the container.
+* You're required to provide billing information when you instantiate a container.
> [!IMPORTANT]
-> Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers do not send customer data (e.g., the image or text that is being analyzed) to Microsoft.
+> Cognitive Services containers aren't licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Cognitive Services containers don't send customer data (for example, the image or text that's being analyzed) to Microsoft.
## Next steps
-* Review [configure containers](speech-container-configuration.md) for configuration settings
-* Learn how to [use Speech service containers with Kubernetes and Helm](speech-container-howto-on-premises.md)
-* Use more [Cognitive Services containers](../cognitive-services-container-support.md)
+* Review [configure containers](speech-container-configuration.md) for configuration settings.
+* Learn how to [use Speech service containers with Kubernetes and Helm](speech-container-howto-on-premises.md).
+* Use more [Cognitive Services containers](../cognitive-services-container-support.md).
cognitive-services Configure Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/text-analytics-for-health/how-to/configure-containers.md
This setting can be found in the following place:
|Required| Name | Data type | Description |
|--|--|--|--|
-|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gathering required parameters](use-containers.md#gathering-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../../../cognitive-services-custom-subdomains.md). |
+|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](use-containers.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Cognitive Services](../../../cognitive-services-custom-subdomains.md). |
## Eula setting
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-recording.md
Many countries and states have laws and regulations that apply to the recording
Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call. An example of a recording metadata file is provided below for reference.
+## Availability
+Currently, ACS Call Recording APIs are available in C# and Java.
## Next steps

Check out the [Call Recording Quickstart](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn more.
communication-services Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/closed-captions.md
Here are main scenarios where Closed Captions are useful:
- **Accessibility**. Scenarios when audio can't be heard, either because of a noisy environment, such as an airport, or because of an environment that must be kept quiet, such as a hospital. - **Inclusivity**. Closed Captioning was developed to aid hearing-impaired people, but it could be useful for a language proficiency as well.
+![closed captions](../media/call-closed-caption.png)
## When to use Closed Captions

- Closed Captions help maintain concentration and engagement, which can provide a better experience for viewers with learning disabilities, a language barrier, attention deficit disorder, or hearing impairment.
communication-services Quickstart Botframework Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md
+
+ Title: Quickstart - Add a bot to your chat app
+
+description: This quickstart shows you how to build a chat experience with a bot by using the Communication Services Chat SDK and Bot Services.
++++ Last updated : 01/25/2022+++++
+# Quickstart: Add a bot to your chat app
+
+> [!IMPORTANT]
+> This functionality is in private preview, and restricted to a limited number of Azure Communication Services early adopters. You can [submit this form to request participation in the preview](https://forms.office.com/r/HBm8jRuuGZ) and we will review your scenario(s) and evaluate your participation in the preview.
+>
+> Private Preview APIs and SDKs are provided without a service-level agreement, and are not appropriate for production workloads and should only be used with test users and test data. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+
+In this quickstart, we'll learn how to build conversational AI experiences in a chat application by using the 'Communication Services - Chat' messaging channel that's available under Azure Bot Services. We'll create a bot by using the BotFramework SDK, and we'll learn how to integrate that bot into a chat application built with the Communication Services Chat SDK.
+
+You'll learn how to:
+
+- [Create and deploy a bot](#step-1create-and-deploy-a-bot)
+- [Get an Azure Communication Services Resource](#step-2get-an-azure-communication-services-resource)
+- [Enable Communication Services' Chat Channel for the bot](#step-3enable-acs-chat-channel)
+- [Create a chat app and add bot as a participant](#step-4create-a-chat-app-and-add-bot-as-a-participant)
+- [Explore additional features available for bot](#more-things-you-can-do-with-bot)
+
+## Prerequisites
+- Create an Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- [Visual Studio (2019 and above)](https://visualstudio.microsoft.com/vs/)
+- [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) (Make sure to install the version that corresponds to your Visual Studio instance, 32-bit vs. 64-bit.)
+++
+## Step 1 - Create and deploy a bot
+
+To use ACS chat as a channel in Azure Bot Service, the first step is to deploy a bot. Follow these steps:
+
+### Provision a bot service resource in Azure
+
+ 1. Select the **Create a resource** option in the Azure portal.
+
+ :::image type="content" source="./media/create-a-new-resource.png" alt-text="Create a new resource":::
+
+ 2. Search for **Azure Bot** in the list of available resource types.
+
+ :::image type="content" source="./media/search-azure-bot.png" alt-text="Search Azure Bot":::
++
+ 3. Choose Azure Bot to create it.
+
+    :::image type="content" source="./media/create-azure-bot.png" alt-text="Create Azure Bot":::
+
+ 4. Finally, create the Azure Bot resource. You can use an existing Microsoft App ID or have a new one created automatically.
+
+ :::image type="content" source="./media/smaller-provision-azure-bot.png" alt-text="Provision Azure Bot" lightbox="./media/provision-azure-bot.png":::
+
+### Get Bot's MicrosoftAppId and MicrosoftAppPassword
+
+After you create the Azure Bot resource, the next step is to set a password for the app ID that serves as the bot credential (if you chose to have one created automatically in the first step).
+
+ 1. Go to Azure Active Directory
+
+ :::image type="content" source="./media/azure-ad.png" alt-text="Azure Active Directory":::
+
+2. Find your app in the **App registrations** blade.
+
+ :::image type="content" source="./media/smaller-app-registration.png" alt-text="App Registration" lightbox="./media/app-registration.png":::
+
+3. Create a new password for your app from the `Certificates and Secrets` blade, and save the password you create, because you won't be able to copy it again.
+
+ :::image type="content" source="./media/smaller-save-password.png" alt-text="Save password" lightbox="./media/save-password.png":::
+
+### Create a Web App where actual bot logic resides
+
+Create a Web App where the actual bot logic resides. You can check out samples at [Bot Builder Samples](https://github.com/Microsoft/BotBuilder-Samples) and tweak them, or use the Bot Builder SDK to create one: [Bot Builder documentation](https://docs.microsoft.com/composer/introduction). One of the simplest samples to play around with is the [Echo Bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot), which includes steps on how to use it and is the bot used in this example. Generally, the Bot Service expects the bot application's Web App controller to expose an endpoint at `/api/messages`, which handles all the messages reaching the bot (a minimal controller sketch appears after the following steps). To create the bot application, follow these steps.
+
+ 1. As shown previously, create a resource and choose `Web App` in the search.
+
+ :::image type="content" source="./media/web-app.png" alt-text="Web app":::
++
+ 2. Configure the options you want to set including the region you want to deploy it to.
+
+ :::image type="content" source="./media/web-app-create-options.png" alt-text="Web App Create Options":::
+++
+ 3. Review your options and create the Web App. After it's been provisioned, go to the resource and copy the hostname URL that the Web App exposes.
+
+ :::image type="content" source="./media/web-app-endpoint.png" alt-text="Web App endpoint":::
++
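For reference, here's a minimal sketch of the controller that exposes the `/api/messages` endpoint. The Echo Bot sample already ships with an equivalent controller, so treat this only as an illustration of the pattern.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Integration.AspNet.Core;

[Route("api/messages")]
[ApiController]
public class BotController : ControllerBase
{
    private readonly IBotFrameworkHttpAdapter _adapter;
    private readonly IBot _bot;

    public BotController(IBotFrameworkHttpAdapter adapter, IBot bot)
    {
        _adapter = adapter;
        _bot = bot;
    }

    [HttpPost]
    [HttpGet]
    public async Task PostAsync()
    {
        // Delegate the incoming activity to the adapter, which runs the bot's turn logic.
        await _adapter.ProcessAsync(Request, Response, _bot);
    }
}
```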
+### Configure the Azure Bot
+
+Configure the Azure Bot we created with its Web App endpoint where the bot logic is located. To do this, copy the hostname URL of the Web App and append it with `/api/messages`
+
+ :::image type="content" source="./media/smaller-bot-configure-with-endpoint.png" alt-text="Bot Configure with Endpoint" lightbox="./media/bot-configure-with-endpoint.png":::
++
+### Deploy the Azure Bot
+
+The final step is to deploy the bot logic to the Web App we created. As mentioned, this tutorial uses the Echo Bot, which demonstrates only a limited set of capabilities, such as echoing the user input. Here's how to deploy it to the Azure Web App.
+
+ 1. To use the samples, clone this Github repository using Git.
+ ```
+    git clone https://github.com/Microsoft/BotBuilder-Samples.git
+    cd BotBuilder-Samples
+ ```
+ 2. Open the project located here [Echo bot](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/02.echo-bot) in Visual Studio.
+
+ 3. Go to the appsettings.json file inside the project, and paste the application ID and password you created earlier into their respective places.
+ ```js
+ {
+ "MicrosoftAppId": "<App-registration-id>",
+ "MicrosoftAppPassword": "<App-password>"
+ }
+ ```
+
+ 4. To publish the Web App code to Azure, select the project and choose the **Publish** option in Visual Studio.
+
+ :::image type="content" source="./media/publish-app.png" alt-text="Publish app":::
+
+ 5. Click on New to create a new publishing profile, choose Azure as the target, and Azure App Service as the specific target.
+
+ :::image type="content" source="./media/select-azure-as-target.png" alt-text="Select Azure as Target":::
+
+ :::image type="content" source="./media/select-app-service.png" alt-text="Select App Service":::
+
+ 6. Lastly, this option opens the deployment configuration. Choose the Web App you provisioned from the list that appears after you sign in to your Azure account. When you're ready, select `Finish` to start the deployment.
+
+ :::image type="content" source="./media/smaller-deployment-config.png" alt-text="Deployment config" lightbox="./media/deployment-config.png":::
+
+## Step 2 - Get an Azure Communication Services Resource
+Now that the bot part is sorted out, we'll need an ACS resource, which we'll use to configure the ACS channel.
+1. Create an Azure Communication Services resource. For details, see [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md). You'll need to **record your resource endpoint and key** for this quickstart.
+2. Create an ACS user and issue a [user access token](../../quickstarts/access-tokens.md). Be sure to set the scope to **chat**, and **note the token string as well as the userId string** (a minimal code sketch follows this list).
+
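As a minimal sketch of step 2 in code, assuming the `Azure.Communication.Identity` package (`<CONNECTION_STRING>` is a placeholder for your resource's connection string):

```csharp
using System;
using Azure.Communication.Identity;

var identityClient = new CommunicationIdentityClient("<CONNECTION_STRING>");

// Create an ACS user and issue an access token scoped to chat.
var user = await identityClient.CreateUserAsync();
var tokenResponse = await identityClient.GetTokenAsync(
    user.Value,
    scopes: new[] { CommunicationTokenScope.Chat });

Console.WriteLine($"userId: {user.Value.Id}");
Console.WriteLine($"token: {tokenResponse.Value.Token}");
```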
+## Step 3 - Enable ACS Chat Channel
+With the ACS resource, we can configure the ACS channel in Azure Bot to bind an ACS user ID with a bot. Note that currently, only allowlisted Azure accounts can see the Azure Communication Services - Chat channel.
+1. Go to your Bot Services resource in the Azure portal. Go to the `Channels` blade and select the `Azure Communication Services - Chat` channel from the list provided.
+
+ :::image type="content" source="./media/smaller-demoapp-launch-acs-chat.png" alt-text="DemoApp Launch Acs Chat" lightbox="./media/demoapp-launch-acs-chat.png":::
+
+
+2. Provide the resource endpoint and the key belonging to the ACS resource that you want to connect with.
+
+ :::image type="content" source="./media/smaller-demoapp-connect-acsresource.png" alt-text="DemoApp Connect Acs Resource" lightbox="./media/demoapp-connect-acsresource.png":::
++
+3. Once the provided resource details are verified, you'll see the **bot's ACS ID** assigned. With this ID, you can add the bot to a conversation whenever appropriate by using Chat's AddParticipant API (a minimal sketch follows this list). Once the bot is added as a participant to a chat, it starts receiving chat-related activities and can respond in the chat thread.
+
+ :::image type="content" source="./media/smaller-demoapp-bot-detail.png" alt-text="DemoApp Bot Detail" lightbox="./media/demoapp-bot-detail.png":::
++
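If you create the chat thread first and want to add the bot afterward, a minimal sketch looks like the following. It assumes the `chatThreadClient` created in step 4 below, and `<BOT_ID>` is the ID shown in this step.

```csharp
// Add the bot to an existing chat thread by its ACS ID.
var botParticipant = new ChatParticipant(new CommunicationUserIdentifier("<BOT_ID>"))
{
    DisplayName = "BotDisplayName"
};
await chatThreadClient.AddParticipantAsync(botParticipant);
```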
+## Step 4 - Create a chat app and add bot as a participant
+Now that you have the bot's ACS ID, you can create a chat thread with the bot as a participant.
+### Create a new C# application
+
+```console
+dotnet new console -o ChatQuickstart
+```
+
+Change your directory to the newly created app folder and use the `dotnet build` command to compile your application.
+
+```console
+cd ChatQuickstart
+dotnet build
+```
+
+### Install the package
+
+Install the Azure Communication Chat SDK for .NET
+
+```PowerShell
+dotnet add package Azure.Communication.Chat
+```
+
+### Create a chat client
+
+To create a chat client, you'll use your Communication Services endpoint and the access token that was generated as part of Step 2. You need to use the `CommunicationIdentityClient` class from the Identity SDK to create a user and issue a token to pass to your chat client.
++
+Copy the following code snippets and paste into source file: **Program.cs**
+```csharp
+using Azure;
+using Azure.Communication;
+using Azure.Communication.Chat;
+using System;
+
+namespace ChatQuickstart
+{
+ class Program
+ {
+ static async System.Threading.Tasks.Task Main(string[] args)
+ {
+ // Your unique Azure Communication service endpoint
+ Uri endpoint = new Uri("https://<RESOURCE_NAME>.communication.azure.com");
+
+ CommunicationTokenCredential communicationTokenCredential = new CommunicationTokenCredential(<Access_Token>);
+ ChatClient chatClient = new ChatClient(endpoint, communicationTokenCredential);
+ }
+ }
+}
+```
+
+### Start a chat thread with the bot
+
+Use the `CreateChatThreadAsync` method on the chat client to create a chat thread. Replace `<BOT_ID>` with the bot's ACS ID that you obtained.
+```csharp
+var chatParticipant = new ChatParticipant(identifier: new CommunicationUserIdentifier(id: "<BOT_ID>"))
+{
+ DisplayName = "BotDisplayName"
+};
+CreateChatThreadResult createChatThreadResult = await chatClient.CreateChatThreadAsync(topic: "Hello Bot!", participants: new[] { chatParticipant });
+ChatThreadClient chatThreadClient = chatClient.GetChatThreadClient(threadId: createChatThreadResult.ChatThread.Id);
+string threadId = chatThreadClient.Id;
+```
+
+### Get a chat thread client
+The `GetChatThreadClient` method returns a thread client for a thread that already exists.
+
+```csharp
+string threadId = "<THREAD_ID>";
+ChatThreadClient chatThreadClient = chatClient.GetChatThreadClient(threadId: threadId);
+```
+
+### Send a message to a chat thread
+
+Use `SendMessage` to send a message to a thread.
+```csharp
+SendChatMessageOptions sendChatMessageOptions = new SendChatMessageOptions()
+{
+ Content = "Hello World",
+ MessageType = ChatMessageType.Text
+};
+
+SendChatMessageResult sendChatMessageResult = await chatThreadClient.SendMessageAsync(sendChatMessageOptions);
+
+string messageId = sendChatMessageResult.Id;
+```
+
+### Receive chat messages from a chat thread
+
+You can retrieve chat messages by polling the `GetMessages` method on the chat thread client at specified intervals.
+
+```csharp
+AsyncPageable<ChatMessage> allMessages = chatThreadClient.GetMessagesAsync();
+await foreach (ChatMessage message in allMessages)
+{
+ Console.WriteLine($"{message.Id}:{message.Content.Message}");
+}
+```
+You should see the bot's echo reply to "Hello World" in the list of messages.
+When you build an actual chat application, you can also receive real-time chat messages by subscribing to listen for new incoming messages with our JavaScript or mobile SDKs. Here's an example that uses the JavaScript SDK:
+```js
+// open notifications channel
+await chatClient.startRealtimeNotifications();
+// subscribe to new notification
+chatClient.on("chatMessageReceived", (e) => {
+ console.log("Notification chatMessageReceived!");
+ // your code here
+});
+```
++
+### Deploy the C# chat application
+If you would like to deploy the chat application, you can follow these steps:
+1. Open the chat project in Visual Studio.
+2. Right-click the ChatQuickstart project and select **Publish**.
+
+ :::image type="content" source="./media/deploy-chat-application.png" alt-text="Deploy Chat Application":::
++
+## More things you can do with bot
+Besides simple text messages, the bot can also receive and send many other activities, including:
+- Conversation update
+- Message update
+- Message delete
+- Typing indicator
+- Event activity
+
+### Send a welcome message when a new user is added to the thread
+The current Echo Bot logic accepts input from the user and echoes it back. If you'd like to add more logic, such as responding to a participant-added ACS event, copy the following code snippet and paste it into the source file [EchoBot.cs](https://github.com/microsoft/BotBuilder-Samples/blob/main/samples/csharp_dotnetcore/02.echo-bot/Bots/EchoBot.cs):
+
+```csharp
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.Bot.Builder;
+using Microsoft.Bot.Schema;
+
+namespace Microsoft.BotBuilderSamples.Bots
+{
+ public class EchoBot : ActivityHandler
+ {
+ public override async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken)
+ {
+ if (turnContext.Activity.Type == ActivityTypes.Message)
+ {
+ var replyText = $"Echo: {turnContext.Activity.Text}";
+ await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replyText), cancellationToken);
+ }
+ else if (ActivityTypes.ConversationUpdate.Equals(turnContext.Activity.Type))
+ {
+ if (turnContext.Activity.MembersAdded != null)
+ {
+ foreach (var member in turnContext.Activity.MembersAdded)
+ {
+ if (member.Id != turnContext.Activity.Recipient.Id)
+ {
+ await turnContext.SendActivityAsync(MessageFactory.Text("Hello and welcome to chat with EchoBot!"), cancellationToken);
+ }
+ }
+ }
+ }
+ }
+ }
+}
+```
+### Send an adaptive card
+
+To help you increase engagement and efficiency and communicate with users in a variety of ways, you can send adaptive cards to the chat thread. You can send adaptive cards from a bot by adding them as bot activity attachments.
++
+```csharp
+var reply = Activity.CreateMessageActivity();
+var adaptiveCard = new Attachment()
+{
+ ContentType = "application/vnd.microsoft.card.adaptive",
+ Content = {/* the adaptive card */}
+};
+reply.Attachments.Add(adaptiveCard);
+await turnContext.SendActivityAsync(reply, cancellationToken);
+```
+You can find sample payloads for adaptive cards at [Samples and Templates](https://adaptivecards.io/samples)
+
+On the ACS user side, the ACS message's metadata field indicates that this is a message with an attachment. The key is `microsoft.azure.communication.chat.bot.contenttype`, which is set to the value `azurebotservice.adaptivecard`. Here's an example of a chat message that will be received, followed by a sketch of how a client might detect it:
+
+```json
+{
+ "content": "{\"attachments\":[{\"contentType\":\"application/vnd.microsoft.card.adaptive\",\"content\":{/* the adaptive card */}}]}",
+ "senderDisplayName": "BotDisplayName",
+ "metadata": {
+ "microsoft.azure.communication.chat.bot.contenttype": "azurebotservice.adaptivecard"
+ },
+ "messageType": "Text"
+}
+```
+
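As a sketch of how a client might detect such a message (this assumes a Chat SDK version that exposes message metadata, and it reuses the `chatThreadClient` from the quickstart above):

```csharp
// Detect adaptive card messages from the bot by their metadata key.
const string botContentTypeKey = "microsoft.azure.communication.chat.bot.contenttype";

await foreach (ChatMessage message in chatThreadClient.GetMessagesAsync())
{
    if (message.Metadata != null &&
        message.Metadata.TryGetValue(botContentTypeKey, out var contentType) &&
        contentType == "azurebotservice.adaptivecard")
    {
        // The message content is a JSON payload whose "attachments" array holds the card.
        Console.WriteLine($"Adaptive card message: {message.Content.Message}");
    }
}
```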
+## Next steps
+
+Try the [Sample App](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App), which showcases a 1:1 chat between the end user and chat bot, and uses BotFramework's WebChat UI component.
communication-services Get Started Rooms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/rooms/get-started-rooms.md
Last updated 11/19/2021
+zone_pivot_groups: acs-csharp-java
# Quickstart: Create and manage a room resource
This quickstart will help you get started with Azure Communication Services Rooms. A `room` is a server-managed communications space for a known, fixed set of participants to collaborate for a pre-determined duration. The [rooms conceptual documentation](../../concepts/rooms/room-concept.md) covers more details and potential use cases for `rooms`.
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).-
-## Setting up
-
-### Create a new C# application
-
-In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `RoomsQuickstart`. This command creates a simple "Hello World" C# project with a single source file: **Program.cs**.
-
-```console
-dotnet new console -o RoomsQuickstart
-```
-
-Change your directory to the newly created app folder and use the `dotnet build` command to compile your application.
-
-```console
-cd RoomsQuickstart
-dotnet build
-```
-
-### Initialize a room client
-
-Create a new `RoomsClient` object that will be used to create new `rooms` and manage their properties and lifecycle. The connection string of your `Communications Service` will be used to authenticate the request. For more information on connection strings, see [this page](../create-communication-resource.md#access-your-connection-strings-and-service-endpoints).
-
-```csharp
-// Find your Communication Services resource in the Azure portal
-var connectionString = "<connection_string>";
-RoomsClient roomsClient = new RoomsClient(connectionString);
-```
-
-### Create a room
-
-Create a new `room` with default properties using the code snippet below:
-
-```csharp
-CreateRoomRequest createRoomRequest = new CreateRoomRequest();
-Response<CommunicationRoom> createRoomResponse = await roomsClient.CreateRoomAsync(createRoomRequest);
-CommunicationRoom createCommunicationRoom = createRoomResponse.Value;
-string roomId = createCommunicationRoom.Id;
-```
-
-Since `rooms` are server-side entities, you may want to keep track of and persist the `roomId` in the storage medium of choice. You can reference the `roomId` to view or update the properties of a `room` object.
-
-### Get properties of an existing room
-
-Retrieve the details of an existing `room` by referencing the `roomId`:
-
-```csharp
-Response<CommunicationRoom> getRoomResponse = await roomsClient.GetRoomAsync(roomId)
-CommunicationRoom getCommunicationRoom = getRoomResponse.Value;
-```
-
-### Update the lifetime of a room
-
-The lifetime of a `room` can be modified by issuing an update request for the `ValidFrom` and `ValidUntil` parameters.
-
-```csharp
-var validFrom = new DateTime(2022, 05, 01, 00, 00, 00, DateTimeKind.Utc);
-var validUntil = validFrom.AddDays(1);
-
-UpdateRoomRequest updateRoomRequest = new UpdateRoomRequest();
-updateRoomRequest.ValidFrom = validFrom;
-updateRoomRequest.ValidUntil = validUntil;
-
-Response<CommunicationRoom> updateRoomResponse = await roomsClient.UpdateRoomAsync(roomId, updateRoomRequest);
-CommunicationRoom updateCommunicationRoom = updateRoomResponse.Value;
-```
-
-### Add new participants
-
-To add new participants to a `room`, issue an update request on the room's `Participants`:
-
-```csharp
-var communicationUser1 = "<CommunicationUserId1>";
-var communicationUser2 = "<CommunicationUserId2>";
-var communicationUser3 = "<CommunicationUserId3>";
-
-UpdateRoomRequest updateRoomRequest = new UpdateRoomRequest()
-updateRoomRequest.Participants.Add(communicationUser1, new RoomParticipant());
-updateRoomRequest.Participants.Add(communicationUser2, new RoomParticipant());
-updateRoomRequest.Participants.Add(communicationUser3, new RoomParticipant());
-
-Response<CommunicationRoom> updateRoomResponse = await roomsClient.UpdateRoomAsync(roomId, updateRoomRequest);
-CommunicationRoom updateCommunicationRoom = updateRoomResponse.Value;
-```
-
-Participants that have been added to a `room` become eligible to join calls.
-
-### Remove participants
-
-To remove a participant from a `room` and revoke their access, update the `Participants` list:
-
-```csharp
-var communicationUser1 = "<CommunicationUserId1>";
-var communicationUser2 = "<CommunicationUserId2>";
-var communicationUser3 = "<CommunicationUserId3>";
-
-UpdateRoomRequest updateRoomRequest = new UpdateRoomRequest()
-updateRoomRequest.Participants.Add(communicationUser1, null);
-updateRoomRequest.Participants.Add(communicationUser2, null);
-updateRoomRequest.Participants.Add(communicationUser3, null);
-
-Response<CommunicationRoom> updateRoomResponse = await roomsClient.UpdateRoomAsync(roomId, updateRoomRequest);
-CommunicationRoom updateCommunicationRoom = updateRoomResponse.Value;
-```
-
-### Join a room call
-
-To join a room call, set up your web application using the [Add voice calling to your client app](../voice-video-calling/getting-started-with-calling.md) guide. Once you have an initialized and authenticated `callAgent`, you may specify a context object with the `roomId` property as the `room` identifier. To join the call, use the `join` method and pass the context instance.
-
-```js
-
-const context = { roomId: '<RoomId>' }
-
-const call = callAgent.join(context);
-
-```
-
-### Delete room
-If you wish to disband an existing `room`, you may issue an explicit delete request. All `rooms` and their associated resources are automatically deleted at the end of their validity plus a grace period.
-
-```csharp
-Response deleteRoomResponse = await roomsClient.DeleteRoomAsync(roomId)
-```
## Object model
cosmos-db Create Graph Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/create-graph-console.md
This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](grap
:::image type="content" source="./media/create-graph-console/gremlin-console.png" alt-text="Azure Cosmos DB from the Apache Gremlin console":::
-The Gremlin console is Groovy/Java based and runs on Linux, Mac, and Windows. You can download it from the [Apache TinkerPop site](https://tinkerpop.apache.org/downloads.html).
+The Gremlin console is Groovy/Java based and runs on Linux, Mac, and Windows. You can download it from the [Apache TinkerPop site](https://tinkerpop.apache.org/download.html).
## Prerequisites
You need to have an Azure subscription to create an Azure Cosmos DB account for
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-You also need to install the [Gremlin Console](https://tinkerpop.apache.org/downloads.html). The **recommended version is v3.4.3** or earlier. (To use Gremlin Console on Windows, you need to install [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/https://docsupdatetracker.net/index.html), minimum requires Java 8 but it is preferable to use Java 11).
+You also need to install the [Gremlin Console](https://tinkerpop.apache.org/download.html). The **recommended version is v3.4.3** or earlier. (To use the Gremlin Console on Windows, you need to install the [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/https://docsupdatetracker.net/index.html); the minimum requirement is Java 8, but Java 11 is preferable.)
## Create a database account
cosmos-db Gremlin Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/gremlin-support.md
The following table shows popular Gremlin drivers that you can use against Azure
| [Python](https://tinkerpop.apache.org/docs/3.3.1/reference/#gremlin-python) | [Gremlin-Python on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-python) | [Create Graph using Python](create-graph-python.md) | 3.2.7 | | [PHP](https://packagist.org/packages/brightzone/gremlin-php) | [Gremlin-PHP on GitHub](https://github.com/PommeVerte/gremlin-php) | [Create Graph using PHP](create-graph-php.md) | 3.1.0 | | [Go Lang](https://github.com/supplyon/gremcos/) | [Go Lang](https://github.com/supplyon/gremcos/) | | This library is built by external contributors. The Azure Cosmos DB team doesn't offer any support or maintain the library. |
-| [Gremlin console](https://tinkerpop.apache.org/downloads.html) | [TinkerPop docs](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) | [Create Graph using Gremlin Console](create-graph-console.md) | 3.2.0 + |
+| [Gremlin console](https://tinkerpop.apache.org/download.html) | [TinkerPop docs](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) | [Create Graph using Gremlin Console](create-graph-console.md) | 3.2.0 + |
## Supported Graph Objects
cosmos-db How To Always Encrypted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-always-encrypted.md
Title: Use client-side encryption with Always Encrypted for Azure Cosmos DB
description: Learn how to use client-side encryption with Always Encrypted for Azure Cosmos DB Previously updated : 05/25/2021 Last updated : 01/26/2022
The first step to get started with Always Encrypted is to create your CMKs in Az
1. Create a new key in the **Keys** section. 1. Once the key is created, browse to its current version, and copy its full key identifier:<br>`https://<my-key-vault>.vault.azure.net/keys/<key>/<version>`. If you omit the key version at the end of the key identifier, the latest version of the key is used.
-Next, you need to configure how the Azure Cosmos DB SDK will access your Azure Key Vault instance. This authentication is done through an Azure Active Directory (AD) identity. Most likely, you'll use the identity of an Azure AD application or a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) as the proxy between your client code and your Azure Key Vault instance, although any kind of identity could be used. Use the following steps to use an Azure AD application as the proxy:
+Next, you need to configure how the Azure Cosmos DB SDK will access your Azure Key Vault instance. This authentication is done through an Azure Active Directory (AD) identity. Most likely, you'll use the identity of an Azure AD application or a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) as the proxy between your client code and your Azure Key Vault instance, although any kind of identity could be used. Use the following steps to use your Azure AD identity as the proxy:
-1. Create a new application and add a client secret as described in [this quickstart](../active-directory/develop/quickstart-register-app.md).
-
-1. Go back to your Azure Key Vault instance, browse to the **Access policies** section, and add a new policy:
+1. From your Azure Key Vault instance, browse to the **Access policies** section, and add a new policy:
1. In **Key permissions**, select **Get**, **List**, **Unwrap Key**, **Wrap Key**, **Verify** and **Sign**.
- 1. In **Select principal**, search for the AAD application you've created before.
+ 1. In **Select principal**, search for your Azure AD identity.
### Protect your CMK from accidental deletion
If you're using an existing Azure Key Vault instance, you can verify that these
> - In **.NET** with the [Microsoft.Azure.Cosmos.Encryption package](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Encryption). > - In **Java** with the [azure.cosmos.encryption package](https://mvnrepository.com/artifact/com.azure/azure-cosmos-encryption).
-To use Always Encrypted, an instance of an `EncryptionKeyStoreProvider` must be attached to your Azure Cosmos DB SDK instance. This object is used to interact with the key store hosting your CMKs. The default key store provider for Azure Key Vault is named `AzureKeyVaultKeyStoreProvider`.
+To use Always Encrypted, an instance of an `EncryptionKeyWrapProvider` must be attached to your Azure Cosmos DB SDK instance. This object is used to interact with the key store hosting your CMKs. The default key store provider for Azure Key Vault is named `AzureKeyVaultKeyWrapProvider`.
-The following snippets show how to use the identity of an Azure AD application with a client secret. You can find examples of creating different kinds of `TokenCredential` classes:
+The following snippets use the `DefaultAzureCredential` class to retrieve the Azure AD identity to use when accessing your Azure Key Vault instance. You can find examples of creating different kinds of `TokenCredential` classes:
- [In .NET](/dotnet/api/overview/azure/identity-readme#credential-classes) - [In Java](/java/api/overview/azure/identity-readme#credential-classes)
The following snippets show how to use the identity of an Azure AD application w
# [.NET](#tab/dotnet) > [!NOTE]
-> In .NET, you will need the additional [Microsoft.Data.Encryption.AzureKeyVaultProvider package](https://www.nuget.org/packages/Microsoft.Data.Encryption.AzureKeyVaultProvider) to access the `AzureKeyVaultKeyStoreProvider` class.
+> You will need the additional [Azure.Identity package](https://www.nuget.org/packages/Azure.Identity/) to access the `TokenCredential` classes.
```csharp
-var tokenCredential = new ClientSecretCredential(
- "<aad-app-tenant-id>", "<aad-app-client-id>", "<aad-app-secret>");
-var keyStoreProvider = new AzureKeyVaultKeyStoreProvider(tokenCredential);
+var tokenCredential = new DefaultAzureCredential();
+var keyWrapProvider = new AzureKeyVaultKeyWrapProvider(tokenCredential);
var client = new CosmosClient("<connection-string>")
    .WithEncryption(keyWrapProvider);
```
var client = new CosmosClient("<connection-string>")
# [Java](#tab/java) ```java
-TokenCredential tokenCredential = new ClientSecretCredentialBuilder()
- .authorityHost("https://login.microsoftonline.com")
- .tenantId("<aad-app-tenant-id>")
- .clientId("<aad-app-client-id>")
- .clientSecret("<aad-app-secret>")
+TokenCredential tokenCredential = new DefaultAzureCredentialBuilder()
.build(); AzureKeyVaultKeyStoreProvider encryptionKeyStoreProvider = new AzureKeyVaultKeyStoreProvider(tokenCredential);
Before data can be encrypted in a container, a [data encryption key](#data-encry
var database = client.GetDatabase("my-database"); await database.CreateClientEncryptionKeyAsync( "my-key",
- DataEncryptionKeyAlgorithm.AEAD_AES_256_CBC_HMAC_SHA256,
+ DataEncryptionKeyAlgorithm.AeadAes256CbcHmacSha256,
new EncryptionKeyWrapMetadata(
- keyStoreProvider.ProviderName,
+ keyWrapProvider.ProviderName,
"akvKey", "https://<my-key-vault>.vault.azure.net/keys/<key>/<version>")); ```
var path1 = new ClientEncryptionIncludedPath
Path = "/property1", ClientEncryptionKeyId = "my-key", EncryptionType = EncryptionType.Deterministic.ToString(),
- EncryptionAlgorithm = DataEncryptionKeyAlgorithm.AEAD_AES_256_CBC_HMAC_SHA256.ToString()
+ EncryptionAlgorithm = DataEncryptionKeyAlgorithm.AeadAes256CbcHmacSha256
}; var path2 = new ClientEncryptionIncludedPath { Path = "/property2", ClientEncryptionKeyId = "my-key", EncryptionType = EncryptionType.Randomized.ToString(),
- EncryptionAlgorithm = DataEncryptionKeyAlgorithm.AEAD_AES_256_CBC_HMAC_SHA256.ToString()
+ EncryptionAlgorithm = DataEncryptionKeyAlgorithm.AeadAes256CbcHmacSha256
}; await database.DefineContainer("my-container", "/partition-key") .WithClientEncryptionPolicy()
You may want to "rotate" your CMK (that is, use a new CMK instead of the current
await database.RewrapClientEncryptionKeyAsync( "my-key", new EncryptionKeyWrapMetadata(
- keyStoreProvider.ProviderName,
+ keyWrapProvider.ProviderName,
"akvKey", " https://<my-key-vault>.vault.azure.net/keys/<new-key>/<version>")); ```
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link.md
Title: Azure Synapse Link for Azure Cosmos DB, benefits, and when to use it
-description: Learn about Azure Synapse Link for Azure Cosmos DB. Synapse Link lets you run near real-time analytics (HTAP) using Azure Synapse Analytics over operational data in Azure Cosmos DB.
+description: Learn about Azure Synapse Link for Azure Cosmos DB. Synapse Link lets you run near real time analytics (HTAP) using Azure Synapse Analytics over operational data in Azure Cosmos DB.
# What is Azure Synapse Link for Azure Cosmos DB? [!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
-Azure Synapse Link for Azure Cosmos DB is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables you to run near real-time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
+Azure Synapse Link for Azure Cosmos DB is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables near real time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
-Using [Azure Cosmos DB analytical store](analytical-store-introduction.md), a fully isolated column store, Azure Synapse Link enables no Extract-Transform-Load (ETL) analytics in [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) against your operational data at scale. Business analysts, data engineers and data scientists can now use Synapse Spark or Synapse SQL interchangeably to run near real-time business intelligence, analytics, and machine learning pipelines. You can achieve this without impacting the performance of your transactional workloads on Azure Cosmos DB.
+Using [Azure Cosmos DB analytical store](analytical-store-introduction.md), a fully isolated column store, Azure Synapse Link enables no Extract-Transform-Load (ETL) analytics in [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) against your operational data at scale. Business analysts, data engineers, and data scientists can now use Synapse Spark or Synapse SQL interchangeably to run near real time business intelligence, analytics, and machine learning pipelines. You can achieve this without impacting the performance of your transactional workloads on Azure Cosmos DB.
The following image shows the Azure Synapse Link integration with Azure Cosmos DB and Azure Synapse Analytics:
When compared to the traditional ETL-based solutions, Azure Synapse Link for Azu
### Reduced complexity with No ETL jobs to manage
-Azure Synapse Link allows you to directly access Azure Cosmos DB analytical store using Azure Synapse Analytics without complex data movement. Any updates made to the operational data are visible in the analytical store in near real-time with no ETL or change feed jobs. You can run large scale analytics against analytical store, from Azure Synapse Analytics, without additional data transformation.
+Azure Synapse Link allows you to directly access Azure Cosmos DB analytical store using Azure Synapse Analytics without complex data movement. Any updates made to the operational data are visible in the analytical store in near real-time with no ETL or change feed jobs. You can run large-scale analytics against analytical store, from Azure Synapse Analytics, without additional data transformation.
### Near real-time insights into your operational data
-You can now get rich insights on your operational data in near real-time, using Azure Synapse Link. ETL-based systems tend to have higher latency for analyzing your operational data, due to many layers needed to extract, transform and load the operational data. With native integration of Azure Cosmos DB analytical store with Azure Synapse Analytics, you can analyze operational data in near real-time enabling new business scenarios.
+You can now get rich insights on your operational data in near real-time, using Azure Synapse Link. ETL-based systems tend to have higher latency for analyzing your operational data, due to many layers needed to extract, transform, and load the operational data. With native integration of Azure Cosmos DB analytical store with Azure Synapse Analytics, you can analyze operational data in near real-time enabling new business scenarios.
### No impact on operational workloads
-With Azure Synapse Link, you can run analytical queries against an Azure Cosmos DB analytical store (a separate column store) while the transactional operations are processed using provisioned throughput for the transactional workload (a row-based transactional store). The analytical workload is served independent of the transactional workload traffic without consuming any of the throughput provisioned for your operational data.
+With Azure Synapse Link, you can run analytical queries against an Azure Cosmos DB analytical store, a column store representation of your data, while the transactional operations are processed using provisioned throughput for the transactional workload, over the Azure Cosmos DB row-based transactional store. The analytical workload is independent of the transactional workload traffic and doesn't consume any of the throughput provisioned for your operational data.
### Optimized for large-scale analytics workloads Azure Cosmos DB analytical store is optimized to provide scalability, elasticity, and performance for analytical workloads without any dependency on the compute run-times. The storage technology is self-managed to optimize your analytics workloads. With built-in support into Azure Synapse Analytics, accessing this storage layer provides simplicity and high performance.
-### Cost effective
+### Cost-effective
-With Azure Synapse Link, you can get a cost-optimized, fully managed solution for operational analytics. It eliminates the extra layers of storage and compute required in traditional ETL pipelines for analyzing operational data.
+With Azure Synapse Link, you can get a cost-optimized, fully managed solution for operational analytics. It eliminates extra storage and compute layers required in traditional ETL pipelines for analyzing operational data.
-Azure Cosmos DB analytical store follows a consumption-based pricing model, which is based on data storage and analytical read/write operations and queries executed . It doesnΓÇÖt require you to provision any throughput, as you do today for the transactional workloads. Accessing your data with highly elastic compute engines from Azure Synapse Analytics makes the overall cost of running storage and compute very efficient.
+Azure Cosmos DB analytical store follows a consumption-based pricing model, which is based on data storage and analytical read/write operations and queries executed. It doesn't require you to provision any throughput, as you do today for the transactional workloads. Accessing your data with highly elastic compute engines from Azure Synapse Analytics makes the overall cost of running storage and compute very efficient.
### Analytics for locally available, globally distributed, multi-region writes
You can run analytical queries effectively against the nearest regional copy of
## Enable HTAP scenarios for your operational data
-Synapse Link brings together Azure Cosmos DB analytical store with Azure Synapse analytics runtime support. This integration enables you to build cloud native HTAP (Hybrid transactional/analytical processing) solutions that generate insights based on real-time updates to your operational data over large datasets. It unlocks new business scenarios to raise alerts based on live trends, build near real-time dashboards, and business experiences based on user behavior.
+Synapse Link brings together Azure Cosmos DB analytical store with Azure Synapse Analytics runtime support. This integration enables you to build cloud-native HTAP (hybrid transactional/analytical processing) solutions that generate insights based on real-time updates to your operational data over large datasets. It unlocks new business scenarios to raise alerts based on live trends, build near real-time dashboards, and create business experiences based on user behavior.
### Azure Cosmos DB analytical store
-Azure Cosmos DB analytical store is a column-oriented representation of your operational data in Azure Cosmos DB. This analytical store is suitable for fast, cost effective queries on large operational data sets, without copying data and impacting the performance of your transactional workloads.
+Azure Cosmos DB analytical store is a column-oriented representation of your operational data in Azure Cosmos DB. This analytical store is suitable for fast, cost-effective queries on large operational data sets, without copying data and impacting the performance of your transactional workloads.
-Analytical store automatically picks up high frequency inserts, updates, deletes in your transactional workloads in near real time, as a fully managed capability (ΓÇ£auto-syncΓÇ¥) of Azure Cosmos DB. No change feed or ETL is required.
+Analytical store automatically picks up high-frequency inserts, updates, and deletes in your transactional workloads in near real-time, as a fully managed capability ("auto-sync") of Azure Cosmos DB. No change feed or ETL is required.
If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions for that account. For more information on the analytical store, see the [Azure Cosmos DB Analytical store overview](analytical-store-introduction.md) article.
You can query the data from Azure Cosmos DB analytical store simultaneously, wit
This integration enables the following HTAP scenarios for different users:
-* A BI engineer who wants to model and publish a Power BI report to access the live operational data in Azure Cosmos DB directly through Synapse SQL.
+* A BI Engineer, who wants to model and publish a Power BI report to access the live operational data in Azure Cosmos DB directly through Synapse SQL.
-* A data analyst who wants to derive insights from the operational data in an Azure Cosmos DB container by querying it with Synapse SQL, read the data at scale and combine those findings with other data sources.
+* A Data Analyst, who wants to derive insights from the operational data in an Azure Cosmos DB container by querying it with Synapse SQL, read the data at scale and combine those findings with other data sources.
-* A data scientist who wants to use Synapse Spark to find a feature to improve their model and train that model without doing complex data engineering. They can also write the results of the model post inference into Azure Cosmos DB for real-time scoring on the data through Spark Synapse.
+* A Data Scientist, who wants to use Synapse Spark to find a feature to improve their model and train that model without doing complex data engineering. They can also write the results of the model post inference into Azure Cosmos DB for real-time scoring on the data through Spark Synapse.
-* A data engineer who wants to make data accessible for consumers, by creating SQL or Spark tables over Azure Cosmos DB containers without manual ETL processes.
+* A Data Engineer, who wants to make data accessible for consumers, by creating SQL or Spark tables over Azure Cosmos DB containers, without manual ETL processes.
For more information on Azure Synapse Analytics runtime support for Azure Cosmos DB, see [Azure Synapse Analytics for Cosmos DB support](../synapse-analytics/synapse-link/concept-synapse-link-cosmos-db-support.md).
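As a concrete illustration of the Synapse SQL scenarios above, a serverless SQL pool query against the analytical store might look like the following sketch. The account name, database, key, and container name are placeholders.

```sql
-- Query the Azure Cosmos DB analytical store from a Synapse serverless SQL pool.
SELECT TOP 10 *
FROM OPENROWSET(
        'CosmosDB',
        'Account=my-cosmos-account;Database=my-database;Key=<account-key>',
        MyContainer
     ) AS operational_data;
```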
For more information on Azure Synapse Analytics runtime support for Azure Cosmos
Synapse Link is recommended in the following cases:
-* If you are an Azure Cosmos DB customer and you want to run analytics, BI, and machine learning over your operational data. In such cases, Synapse Link provides a more integrated analytics experience without impacting your transactional storeΓÇÖs provisioned throughput. For example:
+* If you're an Azure Cosmos DB customer and you want to run analytics, BI, and machine learning over your operational data. In such cases, Synapse Link provides a more integrated analytics experience without impacting your transactional store's provisioned throughput. For example:
- * If you are running analytics or BI on your Azure Cosmos DB operational data directly using separate connectors today, or
+ * If you're running analytics or BI on your Azure Cosmos DB operational data directly using separate connectors today, or
- * If you are running ETL processes to extract operational data into a separate analytics system.
+ * If you're running ETL processes to extract operational data into a separate analytics system.
In such cases, Synapse Link provides a more integrated analytics experience without impacting your transactional store's provisioned throughput.
-Synapse Link is not recommended if you are looking for traditional data warehouse requirements such as high concurrency, workload management, and persistence of aggregates across multiple data sources. For more information, see [common scenarios that can be powered with Azure Synapse Link for Azure Cosmos DB](synapse-link-use-cases.md).
+Synapse Link isn't recommended if you're looking for traditional data warehouse requirements such as high concurrency, workload management, and persistence of aggregates across multiple data sources. For more information, see [common scenarios that can be powered with Azure Synapse Link for Azure Cosmos DB](synapse-link-use-cases.md).
## Limitations
-* Azure Synapse Link for Azure Cosmos DB is supported for SQL API and Azure Cosmos DB API for MongoDB. It is not supported for Gremlin API, Cassandra API, and Table API.
+* Azure Synapse Link for Azure Cosmos DB is supported for SQL API and Azure Cosmos DB API for MongoDB. It isn't supported for Gremlin API, Cassandra API, and Table API.
-* Accessing the Azure Cosmos DB analytics store with Azure Synapse Dedicated SQL Pool currently is not supported.
+* Accessing the Azure Cosmos DB analytics store with Azure Synapse Dedicated SQL Pool currently isn't supported.
-* Enabling Synapse Link on existing Cosmos DB containers is only supported for SQL API acconts. Synapse Link can be enabled on new containers for both SQL API and MongoDB API accounts.
+* Enabling Synapse Link on existing Cosmos DB containers is only supported for SQL API accounts. Synapse Link can be enabled on new containers for both SQL API and MongoDB API accounts.
-* Backup and restore of your data in analytical store is not supported at this time. You can recreate your analytical store data in some scenarios as below:
- * Azure Synapse Link and periodic backup mode can coexist in the same database account. In this mode, your transactional store data will be automatically backed up. However, analytical store data is not included in backups and restores. If you use `transactional TTL` equal or bigger than your `analytical TTL` on your container, you can
+* Backup and restore of your data in analytical store isn't supported at this time. You can recreate your analytical store data in some scenarios as below:
+ * Azure Synapse Link and periodic backup mode can coexist in the same database account. In this mode, your transactional store data will be automatically backed up. However, analytical store data isn't included in backups and restores. If you use a `transactional TTL` equal to or bigger than your `analytical TTL` on your container, you can
fully recreate your analytical store data by enabling analytical store on the restored container. Please note, at present, you can only recreate analytical store on your restored containers for SQL API.
+ * Coexistence of Synapse Link and continuous backup mode (point-in-time restore) in the same database account isn't supported. If you enable continuous backup mode, you can't
+ * Synapse Link and continuous backup mode (point=in-time restore) coexistence in the same database account isn't supported. If you enable continuous backup mode, you can't
turn on Synapse Link, and vice versa.
-* RBAC is not supported when querying using Synapse SQL serverless pools.
+* Role-based access control (RBAC) isn't supported when you query with Synapse SQL serverless pools.
## Security
To learn more, see the following docs:
* [Azure Cosmos DB analytical store overview](analytical-store-introduction.md)
-* Checkout the learn module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/learn/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/)
+* Check out the learn module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/learn/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/)
* [Get started with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md)
cost-management-billing Understand Work Scopes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/understand-work-scopes.md
Cost Management Contributor is the recommended least-privilege role. The role al
- **Schedule cost data export** ΓÇô Cost Management Contributors also need access to manage storage accounts to schedule an export to copy data into a storage account. Consider granting [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) to a resource group that contains the storage account where cost data is exported. - **Viewing cost-saving recommendations** ΓÇô Cost Management Readers and Cost Management Contributors have access to *view* cost recommendations by default. However, access to act on the cost recommendations requires access to individual resources. Consider granting a [service-specific role](../../role-based-access-control/built-in-roles.md#all) if you want to act on a cost-based recommendation.
-Management groups are only supported if they contain up to 3,000 Enterprise Agreement (EA), Pay-as-you-go (PAYG), or Microsoft internal subscriptions. Management groups with more than 3,000 subscriptions or subscriptions with other offer types, like Microsoft Customer Agreement or Azure Active Directory subscriptions, can't view costs. If you have a mix of subscriptions, move the unsupported subscriptions to a separate arm of the management group hierarchy to enable Cost Management for the supported subscriptions. As an example, create two management groups under the root management group: **Azure AD** and **My Org**. Move your Azure AD subscription to the **Azure AD** management group and then view and manage costs using the **My Org** management group.
+> [!NOTE]
+> Management groups aren't currently supported in Cost Management features for Microsoft Customer Agreement subscriptions.
+
+Management groups are only supported if they contain up to 3,000 Enterprise Agreement (EA), Pay-as-you-go (PAYG), or Microsoft internal subscriptions. Management groups with more than 3,000 subscriptions or subscriptions with other offer types, like Microsoft Customer Agreement or Azure Active Directory subscriptions, can't view costs.
+
+If you have a mix of subscriptions, move the unsupported subscriptions to a separate arm of the management group hierarchy to enable Cost Management for the supported subscriptions. As an example, create two management groups under the root management group: **Azure AD** and **My Org**. Move your Azure AD subscription to the **Azure AD** management group and then view and manage costs using the **My Org** management group.
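As a sketch of the hierarchy described above, the management groups could be created and the subscription moved with the Azure CLI. The group names and subscription ID are placeholders.

```azurecli
# Create the two management groups under the root management group.
az account management-group create --name AzureAD --display-name "Azure AD"
az account management-group create --name MyOrg --display-name "My Org"

# Move the Azure AD subscription into the Azure AD management group.
az account management-group subscription add --name AzureAD --subscription <azure-ad-subscription-id>
```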
### Feature behavior for each role
Customer Agreement billing scopes support the following roles:
Azure subscriptions are nested under invoice sections, like how they are under EA enrollment accounts. Billing users have access to cost data for the subscriptions and resource groups that are under their respective scopes. However, they don't have access to see or manage resources in the Azure portal. Billing users can view costs by navigating to **Cost Management + Billing** in the Azure portal list of services. Then, filter costs to the specific subscriptions and resource groups they need to report on.
+> [!NOTE]
+> Management group scopes aren't supported for Microsoft Customer Agreement accounts at this time.
+ Billing users don't have access to management groups because they don't explicitly fall under the billing account. However, when management groups are enabled for the organization, all subscription costs are rolled-up to the billing account and to the root management group because they're both constrained to a single directory. Management groups only include purchases that are usage-based. Purchases like reservations and third-party Marketplace offerings aren't included in management groups. So, the billing account and root management group may report different totals. To view these costs, use the billing account or respective billing profile. ### Feature behavior for each role
data-factory Copy Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-log.md
Title: Session log in copy activity
-description: 'Learn about how to enable session log in copy activity in Azure Data Factory.'
+ Title: Session log in a Copy activity
+description: Learn how to enable session log in a Copy activity in Azure Data Factory.
Previously updated : 11/11/2020 Last updated : 01/26/2022
-# Session log in copy activity
+# Session log in a Copy activity
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-You can log your copied file names in copy activity, which can help you to further ensure the data is not only successfully copied from source to destination store, but also consistent between source and destination store by reviewing the copied files in copy activity session logs.
+You can log the names of the files copied by a Copy activity. This helps you confirm that the data not only copied successfully from source to destination, but also lets you validate consistency between the source and destination stores by reviewing the session logs.
-When you enable fault tolerance setting in copy activity to skip faulty data, the skipped files and skipped rows can also be logged. You can get more details from [fault tolerance in copy activity](copy-activity-fault-tolerance.md).
+When you enable the fault tolerance setting in a Copy activity to skip faulty data, the skipped files and skipped rows can also be logged. For more details, see [fault tolerance in copy activity](copy-activity-fault-tolerance.md).
-Given you have the opportunity to get all the file names copied by ADF copy activity via enabling session log, it will be helpful for you in the following scenarios:
-- After you use ADF copy activities to copy the files from one storage to another, you see some files are shown up in destination store which should not. You can scan the copy activity session logs to see which copy activity actually copied those files and when to copy those files. By those, you can easily find the root cause and fix your configurations in ADF. -- After you use ADF copy activities to copy the files from one storage to another, you feel the files copied to the destination are not the same as the ones from the source store. You can scan the copy activity session logs to get the timestamp of copy jobs as well as the metadata of files when ADF copy activities read them from the source store. By those, you can know if those files had been updated by other applications on source store after being copied by ADF.
+Because enabling the session log captures the names of all files copied by an Azure Data Factory (ADF) Copy activity, it's helpful in the following scenarios:
+- After you use ADF Copy activities to copy the files from one storage to another, you find some unexpected files in the destination store. You can scan the Copy activity session logs to see which activity actually copied the files, and when. With this approach, you can easily find the root cause and fix your configurations in ADF.
+- After you use ADF Copy activities to copy the files from one storage to another, you find that the files copied to the destination aren't the ones expected from the source store. You can scan the Copy activity session logs to get the timestamp of copy jobs as well as the metadata of files when ADF Copy activities read them from the source store. With this approach, you can confirm whether the files were updated by other applications on the source store after being copied by ADF.
+# [Azure Data Factory](#tab/data-factory)
-## Configuration
+## Configuration with the Azure Data Factory Studio
+To configure Copy activity logging, first add a Copy activity to your pipeline, and then use its Settings tab to configure logging and various logging options.
+
+To subsequently monitor the log, check the output of a pipeline run on the Monitoring tab of the ADF Studio, under pipeline runs. There, select the pipeline run you want to monitor and then hover over the area beside the activity name, where you'll find icons with links showing the pipeline input, output (once it's complete), and other details.
++
+Select the output icon :::image type="icon" source="media/copy-activity-log/output-icon.png" border="false"::: to see details of the logging for the job, and note the logging location in the selected storage account, where you can see details of all logged activities.
++
+See below for details of the log output format.
+
+# [Synapse Analytics](#tab/synapse-analytics)
+
+## Configuration with Synapse Studio
+To configure Copy activity logging, first add a Copy activity to your pipeline, and then use its Settings tab to configure logging and various logging options.
+
+To monitor the log, check the output of a pipeline run on the Monitoring tab of Synapse Studio, under pipeline runs. Select the run you want to monitor and then hover over the area beside the activity name. Icons will appear with links showing the pipeline input, output (once it's complete), and other details.
++
+Select the output icon :::image type="icon" source="media/copy-activity-log/output-icon.png" border="false"::: to see details of the logging for the job, and note the logging location in the selected storage account, where you can see details of all logged activities.
++
+See below for details of the log output format.
++
+
+## Configuration with JSON
The following example provides a JSON definition to enable session log in Copy Activity: ```json
The following example provides a JSON definition to enable session log in Copy A
Property | Description | Allowed values | Required -- | -- | -- | --
-enableCopyActivityLog | When set it to true, you will have the opportunity to log copied files, skipped files or skipped rows. | True<br/>False (default) | No
+enableCopyActivityLog | When set to true, you'll have the opportunity to log copied files, skipped files, or skipped rows. | True<br/>False (default) | No
logLevel | "Info" will log all the copied files, skipped files and skipped rows. "Warning" will log skipped files and skipped rows only. | Info<br/>Warning (default) | No
-enableReliableLogging | When it is true, copy activity in reliable mode will flush logs immediately once each file is copied to the destination. When you are copying huge amounts of files with reliable logging mode enabled in copy activity, you should expect the copy throughput would be impacted, since double write operations are required for each file copying. One request is to the destination store and another request is to the log storage store. Copy activity in best effort mode will flush logs with batch of records within a period of time, where the copy throughput will be much less impacted. The completeness and timeliness of logging is not guaranteed in this mode since there are a few possibilities that the last batch of log events has not been flushed to the log file when copy activity failed. At this moment, you will see a few files copied to the destination are not logged. | True<br/>False (default) | No
+enableReliableLogging | When it's true, a Copy activity in reliable mode flushes logs immediately after each file is copied to the destination. When you copy many files with reliable logging mode enabled, expect the copy throughput to be affected, since two write operations are required for each file copied: one request goes to the destination store and another to the log storage store. A Copy activity in best effort mode flushes logs in batches of records within a period of time, so the copy throughput is much less affected. Completeness and timeliness of logging aren't guaranteed in this mode, because the last batch of log events might not have been flushed to the log file when a Copy activity fails. In that scenario, you'll see that a few files copied to the destination aren't logged. | True<br/>False (default) | No
logLocationSettings | A group of properties that can be used to specify the location to store the session logs. | | No linkedServiceName | The linked service of [Azure Blob Storage](connector-azure-blob-storage.md#linked-service-properties) or [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#linked-service-properties) to store the session log files. | The names of an `AzureBlobStorage` or `AzureBlobFS` types linked service, which refers to the instance that you use to store the log files. | No
-path | The path of the log files. | Specify the path that you want to store the log files. If you do not provide a path, the service creates a container for you. | No
+path | The path of the log files. | Specify the path that you want to store the log files. If you don't provide a path, the service creates a container for you. | No
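As a rough sketch of how the properties in this table typically fit together inside the Copy activity's `typeProperties`, see the following fragment. The linked service name and path are placeholders, and the exact nesting should be confirmed against the JSON the Studio generates.

```json
"typeProperties": {
    "source": { "type": "BinarySource" },
    "sink": { "type": "BinarySink" },
    "logSettings": {
        "enableCopyActivityLog": true,
        "copyActivityLogSettings": {
            "logLevel": "Warning",
            "enableReliableLogging": false
        },
        "logLocationSettings": {
            "linkedServiceName": {
                "referenceName": "MyLogStorageLinkedService",
                "type": "LinkedServiceReference"
            },
            "path": "sessionlog/"
        }
    }
}
```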
## Monitoring
-### Output from copy activity
-After the copy activity runs completely, you can see the path of log files from the output of each copy activity run. You can find the log files from the path: `https://[your-blob-account].blob.core.windows.net/[logFilePath]/copyactivity-logs/[copy-activity-name]/[copy-activity-run-id]/[auto-generated-GUID].txt`. The log files generated have the .txt extension and their data is in CSV format.
+### Output from a Copy activity
+After the Copy activity completes, you can see the path of the log files in the output of each Copy activity run. You can find the log files at this path: `https://[your-blob-account].blob.core.windows.net/[logFilePath]/copyactivity-logs/[copy-activity-name]/[copy-activity-run-id]/[auto-generated-GUID].txt`. The log files generated have the .txt extension and their data is in CSV format.
```json "output": {
After the copy activity runs completely, you can see the path of log files from
### The schema of the log file
-The following is the schema of a log file.
+The following table shows the schema of a log file.
Column | Description -- | -- Timestamp | The timestamp when ADF reads, writes, or skips the object. Level | The log level of this item. It can be 'Warning' or "Info".
-OperationName | ADF copy activity operational behavior on each object. It can be 'FileRead',' FileWrite', 'FileSkip', or 'TabularRowSkip'.
+OperationName | ADF Copy activity operational behavior on each object. It can be 'FileRead', 'FileWrite', 'FileSkip', or 'TabularRowSkip'.
OperationItem | The file names or skipped rows. Message | More information to show if the file has been read from the source store, or written to the destination store. It can also show why the file or rows were skipped.
-The following is an example of a log file.
+Here's an example of a log file:
``` Timestamp, Level, OperationName, OperationItem, Message 2020-10-19 08:39:13.6688152,Info,FileRead,"sample1.csv","Start to read file: {""Path"":""sample1.csv"",""ItemType"":""File"",""Size"":104857620,""LastModified"":""2020-10-19T08:22:31Z"",""ETag"":""\""0x8D874081F80C01A\"""",""ContentMD5"":""dGKVP8BVIy6AoTtKnt+aYQ=="",""ObjectName"":null}"
Timestamp, Level, OperationName, OperationItem, Message
2020-10-19 08:45:17.6508407,Info,FileRead,"sample2.csv","Complete reading file successfully. " 2020-10-19 08:45:28.7390083,Info,FileWrite,"sample2.csv","Complete writing file from source file: sample2.csv. File is successfully copied." ```
-From the log file above, you can see sample1.csv has been skipped because it failed to be verified to be consistent between source and destination store. You can get more details about why sample1.csv becomes inconsistent is because it was being changed by other applications when ADF copy activity is copying at the same time. You can also see sample2.csv has been successfully copied from source to destination store.
+From the log file above, you can see that sample1.csv was skipped because it couldn't be verified as consistent between the source and destination stores. It became inconsistent because it was being changed by another application while the ADF Copy activity was copying it. You can also see that sample2.csv was successfully copied from the source to the destination store.
You can use multiple analysis engines to further analyze the log files. The examples below use SQL queries to analyze the log file after importing the CSV log file into a SQL database, where the table name can be SessionLogDemo.
select OperationItem from SessionLogDemo where OperationName='FileSkip'
select TIMESTAMP, OperationItem, Message from SessionLogDemo where OperationName='FileSkip' ``` -- Give me the list of files skipped due to the same reason: "blob file does not exist".
+- Give me the list of files skipped due to the same reason: "blob file doesn't exist".
```sql select TIMESTAMP, OperationItem, Message from SessionLogDemo where OperationName='FileSkip' and Message like '%UserErrorSourceBlobNotExist%' ```
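You can also aggregate over the same table, for example to compare how many files were read, written, or skipped. Here's a small sketch against the SessionLogDemo table used above:

```sql
-- Count distinct files per operation type in the imported session log.
SELECT OperationName, COUNT(DISTINCT OperationItem) AS FileCount
FROM SessionLogDemo
GROUP BY OperationName;
```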
data-factory Copy Clone Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-clone-data-factory.md
Previously updated : 06/30/2020 Last updated : 01/26/2022 # Copy or clone a data factory in Azure Data Factory
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-azure-integration-runtime.md
Use the following steps to create an Azure IR using UI.
1. You'll see a pop-up notification when the creation completes. On the **Integration runtimes** page, make sure that you see the newly created IR in the list.
+> [!NOTE]
+> If you want to enable managed virtual network on Azure IR, please see [How to enable managed virtual network](managed-virtual-network-private-endpoint.md)
+ ## Use Azure IR Once an Azure IR is created, you can reference it in your Linked Service definition. Below is a sample of how you can reference the Azure Integration Runtime created above from an Azure Storage Linked Service:
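The referenced sample isn't reproduced here, but the general shape is a `connectVia` reference on the linked service. The following sketch uses placeholder names for the linked service and the integration runtime:

```json
{
    "name": "MyStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "<storage-connection-string>"
        },
        "connectVia": {
            "referenceName": "MyAzureIR",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```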
data-factory Data Access Strategies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-access-strategies.md
Previously updated : 05/28/2020 Last updated : 01/26/2022 # Data access strategies
For more information, see the following related articles:
* [Supported data stores](./copy-activity-overview.md#supported-data-stores-and-formats) * [Azure Key Vault ΓÇÿTrusted ServicesΓÇÖ](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services) * [Azure Storage ΓÇÿTrusted Microsoft ServicesΓÇÖ](../storage/common/storage-network-security.md#trusted-microsoft-services)
-* [Managed identity for Data Factory](./data-factory-service-identity.md)
+* [Managed identity for Data Factory](./data-factory-service-identity.md)
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-private-link.md
You can choose whether to connect your Self-Hosted Integration Runtime (SHIR) to
You can change the selection anytime after creation from the data factory portal page on the Networking blade. After you enable private endpoints there, you must also add a private endpoint to the data factory.
-A private endpoint requires a virtual network and subnet for the link, and a virtual machine within the subnet, which will be used to run the Self-Hosted Integration Runtime (SHIR), connecting via the private endpoint link.
+A private endpoint requires a virtual network and subnet for the link. In this example, a virtual machine within the subnet will be used to run the Self-Hosted Integration Runtime (SHIR), connecting via the private endpoint link.
### Create the virtual network If you do not have an existing virtual network to use with your private endpoint link, you must create one and assign a subnet.
data-factory Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quota-increase.md
description: How to create a support request in the Azure portal for Azure Data
Previously updated : 03/10/2020 Last updated : 01/27/2022
Use the following steps to create a new support request from the Azure portal fo
![The Help + support link](./media/quota-increase/help-plus-support.png)
-1. In **Help + support**, select **New support request**.
+1. In **Help + support**, select **Create a support request**.
:::image type="content" source="./media/quota-increase/new-support-request.png" alt-text="Create a new support request":::
data-factory Bulk Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/scripts/bulk-copy-powershell.md
Previously updated : 10/31/2017 Last updated : 01/27/2022 # PowerShell script - copy multiple tables in bulk by using Azure Data Factory
data-factory Copy Azure Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/scripts/copy-azure-blob-powershell.md
Previously updated : 03/12/2020 Last updated : 01/27/2022 # Use PowerShell to create a data factory pipeline to copy data in the cloud
data-factory Hybrid Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/scripts/hybrid-copy-powershell.md
Previously updated : 10/31/2017 Last updated : 01/27/2022 # Use PowerShell to create a data factory pipeline to copy data from SQL Server to Azure
data-factory Incremental Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/scripts/incremental-copy-powershell.md
Previously updated : 03/12/2020 Last updated : 01/27/2022 # PowerShell script - Incrementally load data by using Azure Data Factory
data-factory Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/scripts/transform-data-spark-powershell.md
Previously updated : 09/12/2017 Last updated : 01/27/2022 # PowerShell script - transform data in cloud using Azure Data Factory
data-factory Whitepapers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/whitepapers.md
Previously updated : 09/04/2019 Last updated : 01/26/2022 # Azure Data Factory whitepapers
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-reference-architectures.md
DDoS Protection Standard is designed [for services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). The following reference architectures are arranged by scenarios, with architecture patterns grouped together. > [!NOTE]
-> Protected resources include public IPs attached to an IaaS VM, Load Balancer (Classic & Standard Load Balancers), Application Gateway (including WAF) cluster, Firewall, Bastion, VPN Gateway, Service Fabric or an IaaS based Network Virtual Appliance (NVA). PaaS services (multitenant) are not supported at present. This includes Azure App Service Environment for PowerApps or API management in a virtual network with a public IP.
+> Protected resources include public IPs attached to an IaaS VM, Load Balancer (Classic & Standard Load Balancers), Application Gateway (including WAF) cluster, Firewall, Bastion, VPN Gateway, Service Fabric or an IaaS based Network Virtual Appliance (NVA). PaaS services (multitenant) are not supported at present. This includes Azure App Service Environment for Power Apps or API management in a virtual network with a public IP.
## Virtual machine (Windows/Linux) workloads
documentation.
> [!NOTE]
-> Azure App Service Environment for PowerApps or API management in a virtual network with a public IP are both not natively supported.
+> Azure App Service Environment for Power Apps or API management in a virtual network with a public IP are both not natively supported.
## Hub-and-spoke network topology with Azure Firewall and Azure Bastion
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/alerts-reference.md
At the bottom of this page, there's a table describing the Microsoft Defender fo
|**Suspicious password access [seen multiple times]**|Analysis of host data has detected suspicious access to encrypted user passwords on %{Compromised Host}. This behavior was seen [x] times today on the following machines: [Machine names]|-|Informational| |**Suspicious password access**|Analysis of host data has detected suspicious access to encrypted user passwords on %{Compromised Host}.|-|Informational| |**Suspicious PHP execution detected**<br>(VM_SuspectPhp)|Machine logs indicate that a suspicious PHP process is running. The action included an attempt to run OS commands or PHP code from the command line using the PHP process. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities such as attempts to infect websites with web shells.|Execution|Medium|
-|**Suspicious request to Kubernetes API**<br>(VM_KubernetesAPI)|Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container.|Execution|Medium|
-|**Suspicious request to the Kubernetes Dashboard**<br>(VM_KubernetesDashboard) | Machine logs indicate that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container. | Lateral movement | Medium |
+|**Suspicious request to Kubernetes API**<br>(VM_KubernetesAPI)|Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container.|LateralMovement|Medium|
+|**Suspicious request to the Kubernetes Dashboard**<br>(VM_KubernetesDashboard) | Machine logs indicate that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container. |LateralMovement| Medium |
|**Threat Intel Command Line Suspect Domain** <br> (VM_ThreatIntelCommandLineSuspectDomain) | The process 'PROCESSNAME' on 'HOST' connected to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred.| Initial Access | Medium | |**Unusual config reset in your virtual machine**<br>(VM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium | |**Unusual deletion of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualDeletion) | Unusual deletion of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **SSH server is running inside a container (Preview)**<br>(K8S.NODE_ContainerSSH) | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium | | **Suspicious compilation detected (Preview)**<br>(K8S.NODE_SuspectCompilation) | Analysis of processes running within a container detected suspicious compilation. Attackers will often compile exploits to escalate privileges. | PrivilegeEscalation, Exploitation | Medium | | **Suspicious file timestamp modification (Preview)**<br>(K8S.NODE_TimestampTampering) | Analysis of host/device data detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
-| **Suspicious request to Kubernetes API (Preview)**<br>(K8S.NODE_KubernetesAPI) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium |
+| **Suspicious request to Kubernetes API (Preview)**<br>(K8S.NODE_KubernetesAPI) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium |
| **Suspicious request to the Kubernetes Dashboard (Preview)**<br>(K8S.NODE_KubernetesDashboard) | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | Execution | Medium | | **Potential crypto coin miner started (Preview)**<br>(K8S.NODE_CryptoCoinMinerExecution) | Analysis of processes running within a container detected a process being started in a way normally associated with digital currency mining. | Execution | Medium | | **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container detected suspicious access to encrypted user passwords. | Persistence | Informational |
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/just-in-time-access-usage.md
Lock down inbound traffic to your Azure Virtual Machines with Microsoft Defender
For a full explanation about how JIT works and the underlying logic, see [Just-in-time explained](just-in-time-access-overview.md).
+For a full explanation of the privilege requirements, see [What permissions are needed to configure and use JIT?](just-in-time-access-overview.md#what-permissions-are-needed-to-configure-and-use-jit).
+ This page teaches you how to include JIT in your security program. You'll learn how to: - **Enable JIT on your VMs** - You can enable JIT with your own custom options for one or more VMs using Defender for Cloud, PowerShell, or the REST API. Alternatively, you can enable JIT with default, hard-coded parameters, from Azure virtual machines. When enabled, JIT locks down inbound traffic to your Azure VMs by creating a rule in your network security group. - **Request access to a VM that has JIT enabled** - The goal of JIT is to ensure that even though your inbound traffic is locked down, Defender for Cloud still provides easy access to connect to VMs when needed. You can request access to a JIT-enabled VM from Defender for Cloud, Azure virtual machines, PowerShell, or the REST API. - **Audit the activity** - To ensure your VMs are secured appropriately, review the accesses to your JIT-enabled VMs as part of your regular security checks. -- ## Availability |Aspect|Details|
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/permissions.md
Title: Permissions in Microsoft Defender for Cloud | Microsoft Docs description: This article explains how Microsoft Defender for Cloud uses role-based access control to assign permissions to users and identify the permitted actions for each role. Previously updated : 01/12/2022 Last updated : 01/27/2022 # Permissions in Microsoft Defender for Cloud
In addition to the built-in roles, there are two roles specific to Defender for
The following table displays roles and allowed actions in Defender for Cloud.
-| **Action** | [Security Reader](../role-based-access-control/built-in-roles.md#security-reader) / <br> [Reader](../role-based-access-control/built-in-roles.md#reader) | [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) / [Owner](../role-based-access-control/built-in-roles.md#owner)| [Contributor](../role-based-access-control/built-in-roles.md#contributor)| [Owner](../role-based-access-control/built-in-roles.md#owner)|
-|:-|:--:|:--:|::|::|::|
-||||**(Resource group level)**|**(Subscription level)**|**(Subscription level)**|
-| Add/assign initiatives (including) regulatory compliance standards) | - | - | - | - | Γ£ö |
-| Edit security policy | - | Γ£ö | - | - | Γ£ö |
-| Enable / disable Microsoft Defender plans | - | Γ£ö | - | - | Γ£ö |
-| Dismiss alerts | - | Γ£ö | - | Γ£ö | Γ£ö |
-| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | - | Γ£ö | Γ£ö | Γ£ö |
-| View alerts and recommendations | Γ£ö | Γ£ö | Γ£ö | Γ£ö | Γ£ö |
-||||||
+| **Action** | [Security Reader](../role-based-access-control/built-in-roles.md#security-reader) / <br> [Reader](../role-based-access-control/built-in-roles.md#reader) | [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) / [Owner](../role-based-access-control/built-in-roles.md#owner) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+|:-|:-:|:-:|:-:|:-:|:-:|
+| | | | **(Resource group level)** | **(Subscription level)** | **(Subscription level)** |
+| Add/assign initiatives (including regulatory compliance standards) | - | - | - | ✔ | ✔ |
+| Edit security policy | - | ✔ | - | ✔ | ✔ |
+| Enable / disable Microsoft Defender plans | - | ✔ | - | ✔ | ✔ |
+| Dismiss alerts | - | ✔ | - | ✔ | ✔ |
+| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | - | ✔ | ✔ | ✔ |
+| View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ |
+| | | | | |
For **auto provisioning**, the specific role required depends on the extension you're deploying. For full details, check the tab for the specific extension in the [availability table on the auto provisioning quick start page](enable-data-collection.md#availability).
devtest-labs Deploy Nested Template Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/deploy-nested-template-environments.md
Title: Deploy nested template environments
-description: Learn how to deploy nested Azure Resource Manager templates to provide environments with Azure DevTest Labs.
+ Title: Deploy nested ARM template environments
+description: Learn how to nest Azure Resource Manager (ARM) templates to deploy Azure DevTest Labs environments.
Previously updated : 06/26/2020 Last updated : 01/26/2022
-# Deploy nested Azure Resource Manager templates for testing environments
-A nested deployment allows you to execute other Azure Resource Manager templates from within a main Resource Manager template. You can decompose your deployment into a set of targeted and purpose-specific templates. A nested deployment provides testing, reuse, and readability benefits. The article [Using linked templates when deploying Azure resources](../azure-resource-manager/templates/linked-templates.md) provides a good overview of this solution with several code samples. This article provides an example that's specific to Azure DevTest Labs.
+# Deploy DevTest Labs environments by using nested templates
-## Key parameters
-While you can create your own Resource Manager template from scratch, we recommend that you use the [Azure Resource Group project](../azure-resource-manager/templates/create-visual-studio-deployment-project.md) in Visual Studio. This project makes it easy to develop and debug templates. When you add a nested deployment resource to azuredeploy.json, Visual Studio adds several items to make the template more flexible. These items include:
+A nested deployment runs secondary Azure Resource Manager (ARM) templates from within a main template. This article shows an example of nesting templates to deploy an Azure DevTest Labs environment. DevTest Labs environments contain multiple infrastructure-as-a-service (IaaS) virtual machines (VMs) with platform-as-a-service (PaaS) resources installed. You can provision the PaaS resources and VMs by using ARM templates.
-- The subfolder with the secondary template and parameters file-- Variable names within the main template file-- Two parameters for the storage location for the new files. The **_artifactsLocation** and **_artifactsLocationSasToken** are the key parameters that the DevTest Labs uses.
+Decomposing a deployment into a set of targeted, purpose-specific templates provides testing, reuse, and readability benefits. For general information about nested templates, including code samples, see [Using linked and nested templates when deploying Azure resources](../azure-resource-manager/templates/linked-templates.md).
-If you aren't familiar with how DevTest Labs works with environments, see [Create multi-VM environments and PaaS resources with Azure Resource Manager templates](devtest-lab-create-environment-from-arm.md). Your templates are stored in the repository linked to the lab in DevTest Labs. When you create a new environment with those templates, the files move into an Azure Storage container in the lab. To locate and copy the nested files, DevTest Labs identifies the _artifactsLocation and _artifactsLocationSasToken parameters, and copies the subfolders up to the storage container. Then, DevTest Labs automatically inserts the location and Shared Access Signature (SaS) token into parameters.
+## Deploy nested templates with Visual Studio
+
+The Azure Resource Group project template in Visual Studio makes it easy to develop and debug ARM templates. When you add a nested template to the main *azuredeploy.json* template file, Visual Studio adds the following items to make the template more flexible:
+
+- A subfolder with the secondary template and parameters files
+- Variable names in the main template file
+- Two key parameters, `_artifactsLocation` and `_artifactsLocationSasToken`
+
+In DevTest Labs, you store ARM templates in a Git repository that you link to the lab. When you use one of the linked repository templates to create a new environment, the deployment copies the template files into an Azure Storage container in the lab. When you add a nested template resource to the repository and main template file, Visual Studio identifies the `_artifactsLocation` and `_artifactsLocationSasToken` values, copies the subfolders to the storage container, and inserts the location and Shared Access Signature (SAS) token into the parameters files.
+
+## Nested template folder structure
+
+In the following template example, the Git repository folder has a subfolder, *nestedtemplates*, with the nested template files *NestOne.json* and *NestOne.parameters.json*. The *azuredeploy.json* main template file builds the URI for the secondary templates from the artifacts location, the nested template folder, and the nested template filename. Similarly, the URI for the parameters file is built from the artifacts location, the nested template folder, and the nested template parameters filename. You can add more nested template subfolders to the primary folder, but only at one level of nesting.
+
+The following screenshot shows the project structure in Visual Studio:
+
+![Screenshot that shows the nested template project structure in Visual Studio.](./media/deploy-nested-template-environments/visual-studio-project-structure.png)
## Nested deployment example
-Here's a simple example of a nested deployment:
+
+The following example shows the main *azuredeploy.json* ARM template file for the nested deployment:
```json
Here's a simple example of a nested deployment:
"outputs": {} ```
-The folder in the repository containing this template has a subfolder `nestedtemplates` with the files **NestOne.json** and **NestOne.parameters.json**. In the **azuredeploy.json**, URI for the template is built using the artifacts location, nested template folder, nested template file name. Similarly, URI for the parameters is built using the artifacts location, nested template folder, and parameter file for the nested template.
-
-Here's the image of the same project structure in Visual Studio:
-
-![Screenshot of project structure in Visual Studio.](./media/deploy-nested-template-environments/visual-studio-project-structure.png)
-
-You can add more folders in the primary folder, but not any deeper than a single level.
- ## Next steps
-See the following articles for details about environments:
-- [Create multi-VM environments and PaaS resources with Azure Resource Manager templates](devtest-lab-create-environment-from-arm.md)-- [Configure and use public environments in Azure DevTest Labs](devtest-lab-configure-use-public-environments.md)-- [Connect an environment to your lab's virtual network in Azure DevTest Labs](connect-environment-lab-virtual-network.md)
+- For more information about DevTest Labs environments, see [Use ARM templates to create DevTest Labs environments](devtest-lab-create-environment-from-arm.md).
+- For more information about using the Visual Studio Azure Resource Group project template, including code samples, see [Creating and deploying Azure resource groups through Visual Studio](../azure-resource-manager/templates/create-visual-studio-deployment-project.md).
+
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-devtest-user.md
Title: Add owners and users
-description: Add owners and users in Azure DevTest Labs using either the Azure portal or PowerShell
+ Title: Add lab owners and users with role-based access control (RBAC)
+description: Learn about the Azure DevTest Labs Owner, Contributor, and DevTest Labs User roles, and how to add members to lab roles by using the Azure portal or Azure PowerShell.
Previously updated : 06/26/2020 Last updated : 01/26/2022
-# Add owners and users in Azure DevTest Labs
-
-Access in Azure DevTest Labs is controlled by [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Using Azure RBAC, you can segregate duties within your team into *roles* where you grant only the amount of access necessary to users to perform their jobs. Three of these Azure roles are *Owner*, *DevTest Labs User*, and *Contributor*. In this article, you learn what actions can be performed in each of the three main Azure roles. From there, you learn how to add users to a lab - both via the portal and via a PowerShell script, and how to add users at the subscription level.
-
-## Actions that can be performed in each role
-There are three main roles that you can assign a user:
-
-* Owner
-* DevTest Labs User
-* Contributor
-
-The following table illustrates the actions that can be performed by users in each of these roles:
-
-| **Actions users in this role can perform** | **DevTest Labs User** | **Owner** | **Contributor** |
-| | | | |
-| **Lab tasks** | | | |
-| Add users to a lab |No |Yes |No |
-| Update cost settings |No |Yes |Yes |
-| **VM base tasks** | | | |
-| Add and remove custom images |No |Yes |Yes |
-| Add, update, and delete formulas |Yes |Yes |Yes |
-| Enable Marketplace images |No |Yes |Yes |
-| **VM tasks** | | | |
-| Create VMs |Yes |Yes |Yes |
-| Start, stop, and delete VMs |Only VMs created by the user |Yes |Yes |
-| Update VM policies |No |Yes |Yes |
-| Add/remove data disks to/from VMs |Only VMs created by the user |Yes |Yes |
-| **Artifact tasks** | | | |
-| Add and remove artifact repositories |No |Yes |Yes |
-| Apply artifacts |Yes |Yes |Yes |
+# Add lab owners, contributors, and users in Azure DevTest Labs
-> [!NOTE]
-> When a user creates a VM, that user is automatically assigned to the **Owner** role of the created VM.
->
->
+Azure DevTest Labs uses Azure [role-based access control](../role-based-access-control/overview.md) (Azure RBAC) to define roles that have only the access necessary to do specific lab tasks. DevTest Labs has three built-in roles: *Owner*, *Contributor*, and *DevTest Labs User*. This article describes the tasks each role can do, and how to add members to lab roles by using the Azure portal or an Azure PowerShell script.
-## Add an owner or user at the lab level
-Owners and users can be added at the lab level via the Azure portal.
-A user can be an external user with a valid [Microsoft account (MSA)](./devtest-lab-faq.yml).
-The following steps guide you through the process of adding an owner or user to a lab in Azure DevTest Labs:
+## Actions each role can take
-1. Sign in to the [Azure portal](https://portal.azure.com) as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+Lab Owner, Contributor, and DevTest Labs User roles can take the following actions in DevTest Labs:
-1. Open the desired resource group and select **DevTest Labs**.
+### Owner
-1. In the navigation menu, select **Access control (IAM)**.
+The lab Owner role can take all of the following actions:
-1. Select **Add** > **Add role assignment**.
+Lab tasks:
+- Add users to the lab.
+- Update cost settings.
- ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
+Virtual machine (VM) base tasks:
+- Add and remove custom images.
+- Add, update, and delete formulas.
+- Enable Marketplace images.
-1. On the **Role** tab, select the **OWNER** or **USER** role.
+VM tasks:
+- Create VMs.
+- Start, stop, or delete VMs.
+- Update VM policies.
+- Add or remove VM data disks.
- ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
+Artifact tasks:
+- Add and remove artifact repositories.
+- Apply artifacts to VMs.
-1. On the **Members** tab, select the user you want to give the desired role to.
+### Contributor
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+The lab Contributor role can take all the same actions as the lab Owner role, except it can't add users to labs.
+### DevTest Labs User
-## Add an external user to a lab using PowerShell
+The DevTest Labs User role can take the following actions in DevTest Labs:
+- Add, update, and delete VM base formulas.
+- Create VMs.
+- Start, stop, or delete VMs the user creates.
+- Add or remove data disks from VMs the user creates.
+- Apply artifacts to VMs.
+
+> [!NOTE]
+> Lab users automatically have the **Owner** role on VMs they create.
-In addition to adding users in the Azure portal, you can add an external user to your lab using a PowerShell script.
-In the following example, modify the parameter values under the **Values to change** comment.
-You can retrieve the `subscriptionId`, `labResourceGroup`, and `labName` values from the lab blade in the Azure portal.
+## Add Owners, Contributors, or DevTest Labs Users
+
+A lab owner can add members to lab roles by using the Azure portal or an Azure PowerShell script. The user to add can be an external user with a valid [Microsoft account (MSA)](./devtest-lab-faq.yml).
+
+Azure permissions propagate from parent scope to child scope. Owners of an Azure subscription that contains labs are automatically owners of the subscription's DevTest Labs service, labs, and lab VMs and resources. Subscription owners can add Owners, Contributors, and DevTest Labs Users to labs in the subscription.
> [!NOTE]
-> The sample script assumes that the specified user has been added as a guest to the Active Directory, and will fail if that is not the case. To add a user not in the Active Directory to a lab, use the Azure portal to assign the user to a role as illustrated in the section, [Add an owner or user at the lab level](#add-an-owner-or-user-at-the-lab-level).
->
->
+> Added lab Owners' scope of administration is narrower than the subscription owner's scope. Added Owners don't have full access to some resources that the DevTest Labs service creates.
-```azurepowershell
-# Add an external user in DevTest Labs user role to a lab
-# Ensure that guest users can be added to the Azure Active directory:
-# https://azure.microsoft.com/documentation/articles/active-directory-create-users/#set-guest-user-access-policies
+### Prerequisites
-# Values to change
-$subscriptionId = "<Enter Azure subscription ID here>"
-$labResourceGroup = "<Enter lab's resource name here>"
-$labName = "<Enter lab name here>"
-$userDisplayName = "<Enter user's display name here>"
+To add members to a lab, you must:
-# Log into your Azure account
-Connect-AzAccount
+- Be an Owner of the lab, either directly or by inheritance as a subscription owner.
+- Sign in to the [Azure portal](https://portal.azure.com) as an [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator).
-# Select the Azure subscription that contains the lab.
-# This step is optional if you have only one subscription.
-Select-AzSubscription -SubscriptionId $subscriptionId
+### Add a lab member by using the Azure portal
-# Retrieve the user object
-$adObject = Get-AzADUser -SearchString $userDisplayName
+To add a member:
-# Create the role assignment.
-$labId = ('subscriptions/' + $subscriptionId + '/resourceGroups/' + $labResourceGroup + '/providers/Microsoft.DevTestLab/labs/' + $labName)
-New-AzRoleAssignment -ObjectId $adObject.Id -RoleDefinitionName 'DevTest Labs User' -Scope $labId
-```
+- At the subscription level, open the subscription page.
+- At the lab level, open the resource group that has the lab, and select the lab from the list of resources.
+
+1. In the left navigation for the subscription or lab, select **Access control (IAM)**.
-## Add an owner or user at the subscription level
-Azure permissions are propagated from parent scope to child scope in Azure. Therefore, owners of an Azure subscription that contains labs are automatically owners of those labs. They also own the VMs and other resources created by the lab's users, and the Azure DevTest Labs service.
+1. Select **Add** > **Add role assignment**.
-You can add additional owners to a lab via the lab's blade in the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040).
-However, the added owner's scope of administration is more narrow than the subscription owner's scope.
-For example, the added owners do not have full access to some of the resources that are created in the subscription by the DevTest Labs service.
+ ![Screenshot that shows an access control (IAM) page with the Add role assignment menu open.](media/devtest-lab-add-devtest-user/add-role-assignment-menu-generic.png)
-To add an owner to an Azure subscription, follow these steps:
+1. On the **Add Role Assignment** page, select the **Owner**, **Contributor**, or **DevTest Labs User** role, and then select **Next**.
-1. Sign in to the [Azure portal](https://portal.azure.com) as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+ ![Screenshot that shows the Add role assignment page with the Role tab selected.](media/devtest-lab-add-devtest-user/add-role-assignment-role-generic.png)
-1. Open the desired Subscription group.
+1. On the **Members** tab, select **Select members**.
-1. In the navigation menu, select **Access control (IAM)**.
+1. On the **Select members** screen, select the member you want to add, and then select **Select**.
-1. Select **Add** > **Add role assignment**.
+1. Select **Review + assign**, and after reviewing the details, select **Review + assign** again.
- ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png)
+<a name="add-an-external-user-to-a-lab-using-powershell"></a>
+### Add a DevTest Labs User to a lab by using Azure PowerShell
-1. On the **Role** tab, select the **OWNER** role.
- ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png)
+You can add a DevTest Labs User to a lab by using the following Azure PowerShell script. The script requires the user to be in Azure Active Directory (Azure AD). For information about adding an external user to Azure AD as a guest, see [Add a new guest user](/active-directory/fundamentals/add-users-azure-active-directory#add-a-new-guest-user). If the user isn't in Azure AD, use the portal procedure instead.
-1. On the **Members** tab, select the user you want to give the owner role to.
+In the following script, update the parameter values under the `# Values to change` comment. You can get the `subscriptionId`, `labResourceGroup`, and `labName` values from the lab's main page in the Azure portal.
+
+```azurepowershell
+# Add an external user to a lab user role in DevTest Labs.
+# Make sure the guest user is added to Azure AD.
+
+# Values to change
+$subscriptionId = "<Azure subscription ID>"
+$labResourceGroup = "<Lab's resource group name>"
+$labName = "<Lab name>"
+$userDisplayName = "<User's display name>"
+
+# Log into your Azure account.
+Connect-AzAccount
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+# Select the Azure subscription that contains the lab. This step is optional if you have only one subscription.
+Select-AzSubscription -SubscriptionId $subscriptionId
+
+# Get the user object.
+$adObject = Get-AzADUser -SearchString $userDisplayName
+
+# Create the role assignment.
+$labId = ('subscriptions/' + $subscriptionId + '/resourceGroups/' + $labResourceGroup + '/providers/Microsoft.DevTestLab/labs/' + $labName)
+New-AzRoleAssignment -ObjectId $adObject.Id -RoleDefinitionName 'DevTest Labs User' -Scope $labId
+```
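After the script completes, you can optionally confirm the assignment. The following check isn't required; it's a quick way to list the role assignments at the lab scope:

```azurepowershell
# Optional check: list role assignments at the lab scope and filter to the DevTest Labs User role.
Get-AzRoleAssignment -Scope $labId |
    Where-Object RoleDefinitionName -eq 'DevTest Labs User' |
    Select-Object DisplayName, SignInName, RoleDefinitionName
```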
+## Next steps
+- [Customize permissions with custom roles](devtest-lab-grant-user-permissions-to-specific-lab-policies.md)
+- [Automate adding lab users](automate-add-lab-user.md)
firewall-manager Secure Cloud Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/secure-cloud-network.md
Previously updated : 03/19/2021 Last updated : 01/26/2022 # Tutorial: Secure your virtual hub using Azure Firewall Manager
-Using Azure Firewall Manager, you can create secured virtual hubs to secure your cloud network traffic destined to private IP addresses, Azure PaaS, and the Internet. Traffic routing to the firewall is automated, so there's no need to create user defined routes (UDRs).
+Using Azure Firewall Manager, you can create secured virtual hubs to secure your cloud network traffic destined to private IP addresses, Azure PaaS, and the Internet. Traffic routing to the firewall is automated, so there's no need to create user-defined routes (UDRs).
![secure the cloud network](media/secure-cloud-network/secure-cloud-network.png)
The two virtual networks will each have a workload server in them and will be pr
3. For **Region**, select **(US) East US**. 4. Select **Next: IP Addresses**. 1. For **Address space**, type **10.0.0.0/16**.
-3. Under **Subnet name**, select **default**.
-4. For **Subnet name**, type **Workload-01-SN**.
-5. For **Subnet address range**, type **10.0.1.0/24**.
-6. Select **Save**.
+1. Select **Add subnet**.
+1. For **Subnet name**, type **Workload-01-SN**.
+1. For **Subnet address range**, type **10.0.1.0/24**.
+1. Select **Add**.
1. Select **Review + create**.
-2. Select **Create**.
+1. Select **Create**.
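If you prefer scripting over the portal, a rough Azure PowerShell equivalent of the preceding virtual network steps might look like the following sketch. It reuses the tutorial's values and assumes the **fw-manager-rg** resource group already exists.

```azurepowershell
# Sketch: create Spoke-01 with the Workload-01-SN subnet (assumes fw-manager-rg already exists).
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'Workload-01-SN' -AddressPrefix '10.0.1.0/24'

New-AzVirtualNetwork `
    -Name 'Spoke-01' `
    -ResourceGroupName 'fw-manager-rg' `
    -Location 'eastus' `
    -AddressPrefix '10.0.0.0/16' `
    -Subnet $subnet
```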
Repeat this procedure to create another similar virtual network:
Create your secured virtual hub using Firewall Manager.
1. From the Azure portal home page, select **All services**. 2. In the search box, type **Firewall Manager** and select **Firewall Manager**.
-3. On the **Firewall Manager** page, select **View secured virtual hubs**.
-4. On the **Firewall Manager | Secured virtual hubs** page, select **Create new secured virtual hub**.
+3. On the **Firewall Manager** page under **Deployments**, select **Virtual hubs**.
+4. On the **Firewall Manager | Virtual hubs** page, select **Create new secured virtual hub**.
5. For **Resource group**, select **fw-manager-rg**. 7. For **Region**, select **East US**. 1. For the **Secured virtual hub name**, type **Hub-01**. 2. For **Hub address space**, type **10.2.0.0/16**.
-3. For the new vWAN name, type **Vwan-01**.
+3. For the new virtual WAN name, type **Vwan-01**.
4. Leave the **Include VPN gateway to enable Trusted Security Partners** check box cleared. 5. Select **Next: Azure Firewall**.
-6. Accept the default **Azure Firewall** **Enabled** setting and then select **Next: Trusted Security Partner**.
-7. Accept the default **Trusted Security Partner** **Disabled** setting, and select **Next: Review + create**.
-8. Select **Create**.
+6. Accept the default **Azure Firewall** **Enabled** setting.
+1. For **Azure Firewall tier**, select **Standard**.
+1. Select **Next: Trusted Security Partner**.
+1. Accept the default **Trusted Security Partner** **Disabled** setting, and select **Next: Review + create**.
+1. Select **Create**.
It takes about 30 minutes to deploy.
Repeat to connect the **Spoke-02** virtual network: connection name - **hub-spok
## Deploy the servers 1. On the Azure portal, select **Create a resource**.
-2. Select **Windows Server 2016 Datacenter** in the **Popular** list.
+2. Select **Windows Server 2019 Datacenter** in the **Popular** list.
3. Enter these values for the virtual machine: |Setting |Value |
After the servers are deployed, select a server resource, and in **Networking**
A firewall policy defines collections of rules to direct traffic on one or more Secured virtual hubs. You'll create your firewall policy and then secure your hub.
-1. From Firewall Manager, select **View Azure Firewall policies**.
+1. From Firewall Manager, select **Azure Firewall policies**.
2. Select **Create Azure Firewall Policy**. 1. For **Resource group**, select **fw-manager-rg**. 1. Under **Policy details**, for the **Name** type **Policy-01** and for **Region** select **East US**.
+1. For **Policy tier**, select **Standard**.
1. Select **Next: DNS Settings**.
-1. Select **Next: TLS Inspection (preview)**.
+1. Select **Next: TLS Inspection**.
1. Select **Next : Rules**. 1. On the **Rules** tab, select **Add a rule collection**. 1. On the **Add a rule collection** page, type **App-RC-01** for the **Name**.
Associate the firewall policy with the hub.
1. From Firewall Manager, select **Azure Firewall Policies**. 1. Select the check box for **Policy-01**.
-1. Select **Manage associations/Associate hubs**.
+1. Select **Manage associations**, **Associate hubs**.
1. Select **hub-01**. 1. Select **Add**.
Now you must ensure that network traffic gets routed through your firewall.
4. Under **Internet traffic**, select **Azure Firewall**. 5. Under **Private traffic**, select **Send via Azure Firewall**. 1. Select **Save**.
+1. Select **OK** on the **Warning** dialog.
+ It takes a few minutes to update the route tables. 1. Verify that the two connections show Azure Firewall secures both Internet and private traffic.
So now you've verified that the firewall network rule is working:
## Clean up resources
-When you are done testing your firewall resources, delete the **fw-manager-rg** resource group to delete all firewall-related resources.
+When you're done testing your firewall resources, delete the **fw-manager-rg** resource group to delete all firewall-related resources.
## Next steps
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-preview.md
Previously updated : 01/24/2022 Last updated : 01/27/2022
As new features are released to preview, some of them will be behind a feature f
This article will be updated to reflect the features that are currently in preview with instructions to enable them. When the features move to General Availability (GA), they'll be available to all customers without the need to enable a feature flag.
-Commands are run in Azure PowerShell to enable the features. For the feature to immediately take effect, an operation needs to be run on the firewall. This can be a rule change (least intrusive), a setting change, or a stop/start operation. Otherwise, the firewall/s is updated with the feature within several days.
- ## Preview features The following features are available in preview.
Currently, a network rule hit event shows the following attributes in the logs:
- Rule collection - Rule name
+To enable the Network Rule Name Logging feature, run the following commands in Azure PowerShell. For the feature to take effect immediately, run an operation on the firewall. This can be a rule change (least intrusive), a setting change, or a stop/start operation. Otherwise, the firewall(s) are updated with the feature within several days.
+ Run the following Azure PowerShell commands to configure Azure Firewall network rule name logging: ```azurepowershell
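As a rough sketch, enabling an Azure Firewall preview feature like this one typically means registering a feature flag on the `Microsoft.Network` resource provider with Azure PowerShell; the feature flag name used here is an assumption:

```azurepowershell
# Sketch only: the feature flag name below is an assumption.
Connect-AzAccount
Select-AzSubscription -SubscriptionId "<subscription-id>"

# Register the preview feature on the Microsoft.Network resource provider.
Register-AzProviderFeature -FeatureName "AFWEnableNetworkRuleNameLogging" -ProviderNamespace "Microsoft.Network"

# Check the registration state; re-register the provider after the feature shows as Registered.
Get-AzProviderFeature -FeatureName "AFWEnableNetworkRuleNameLogging" -ProviderNamespace "Microsoft.Network"
Register-AzResourceProvider -ProviderNamespace "Microsoft.Network"
```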
As more applications move to the cloud, the performance of the network elements
This feature significantly increases the throughput of Azure Firewall Premium. For more details, see [Azure Firewall performance](firewall-performance.md).
+To enable the Azure Firewall Premium performance boost feature, run the following commands in Azure PowerShell. Stop and start the firewall for the feature to take effect immediately. Otherwise, the firewall(s) are updated with the feature within several days.
+ Currently, the performance boost feature isn't recommended for SecureHub Firewalls. Refer back to this article for the latest updates as we work to change this recommendation. Also, this setting has no effect on Standard Firewalls. Run the following Azure PowerShell commands to configure the Azure Firewall Premium performance boost:
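A similar feature-flag registration pattern applies to the performance boost. The following sketch assumes a hypothetical feature flag name:

```azurepowershell
# Sketch only: the feature flag name below is an assumption.
Register-AzProviderFeature -FeatureName "AFWEnableAccelnet" -ProviderNamespace "Microsoft.Network"
Register-AzResourceProvider -ProviderNamespace "Microsoft.Network"

# Stop and start the firewall afterward if you want the change to apply immediately.
```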
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/integrate-with-nat-gateway.md
Title: Scale SNAT ports with Azure NAT Gateway
-description: You can integrate Azure Firewall with NAT Gateway to increase SNAT ports.
+ Title: Scale SNAT ports with Azure Virtual Network NAT
+description: You can integrate Azure Firewall with a NAT gateway to increase SNAT ports.
-+ Previously updated : 04/23/2021- Last updated : 01/27/2022+
-# Scale SNAT ports with Azure NAT Gateway
+# Scale SNAT ports with Azure Virtual Network NAT
Azure Firewall provides 2048 SNAT ports per public IP address configured, and you can associate up to [250 public IP addresses](./deploy-multi-public-ip-powershell.md). Depending on your architecture and traffic patterns, you might need more than the 512,000 available SNAT ports with this configuration. For example, when you use it to protect large [Windows Virtual Desktop deployments](./protect-azure-virtual-desktop.md) that integrate with Microsoft 365 Apps. Another challenge with using a large number of public IP addresses is when there are downstream IP address filtering requirements. Azure Firewall randomly selects the source public IP address to use for a connection, so you need to allow all public IP addresses associated with it. Even if you use [Public IP address prefixes](../virtual-network/ip-services/public-ip-address-prefix.md) and you need to associate 250 public IP addresses to meet your outbound SNAT port requirements, you still need to create and allow 16 public IP address prefixes.
-A better option to scale outbound SNAT ports is to use [NAT gateway resource](../virtual-network/nat-gateway/nat-overview.md). It provides 64,000 SNAT ports per public IP address and supports up to 16 public IP addresses, effectively providing up to 1,024,000 outbound SNAT ports.
+A better option to scale outbound SNAT ports is to use an [Azure Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) as a NAT gateway. It provides 64,000 SNAT ports per public IP address and supports up to 16 public IP addresses, effectively providing up to 1,024,000 outbound SNAT ports.
-When a NAT gateway resource is associated with an Azure Firewall subnet, all outbound Internet traffic automatically uses the public IP address of the NAT gateway. There is no need to configure [User Defined Routes](../virtual-network/tutorial-create-route-table-portal.md). Response traffic uses the Azure Firewall public IP address to maintain flow symmetry. If there are multiple IP addresses associated with the NAT gateway the IP address is randomly selected. It isn't possible to specify what address to use.
+When a NAT gateway resource is associated with an Azure Firewall subnet, all outbound Internet traffic automatically uses the public IP address of the NAT gateway. There's no need to configure [User Defined Routes](../virtual-network/tutorial-create-route-table-portal.md). Response traffic uses the Azure Firewall public IP address to maintain flow symmetry. If there are multiple IP addresses associated with the NAT gateway, the IP address is randomly selected. It isn't possible to specify what address to use.
-There is no double NAT with this architecture. Azure Firewall instances send the traffic to NAT gateway using their private IP address rather than Azure Firewall public IP address.
+There's no double NAT with this architecture. Azure Firewall instances send the traffic to the NAT gateway using their private IP address rather than the Azure Firewall public IP address.
> [!NOTE]
-> Using Azure NAT Gateway is currently incompatible with Azure Firewall if you have deployed your [Azure Firewall across multiple availability zones](deploy-availability-zone-powershell.md).
+> Using Azure Virtual Network NAT is currently incompatible with Azure Firewall if you have deployed your [Azure Firewall across multiple availability zones](deploy-availability-zone-powershell.md).
+>
+> In addition, Azure Virtual Network NAT integration is not currently supported in secured virtual hub network architectures. You must deploy using a hub virtual network architecture. For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md).
-## Associate NAT gateway with Azure Firewall subnet - Azure PowerShell
+## Associate a NAT gateway with an Azure Firewall subnet - Azure PowerShell
The following example creates and attaches a NAT gateway with an Azure Firewall subnet using Azure PowerShell.
$firewallSubnet.NatGateway = $natGateway
$virtualNetwork | Set-AzVirtualNetwork ```
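The snippet above shows only the final attachment step. A fuller sketch of the flow, with hypothetical resource names, might look like this:

```azurepowershell
# Sketch with hypothetical names: create a NAT gateway and attach it to AzureFirewallSubnet.
$pip = New-AzPublicIpAddress -Name 'nat-pip' -ResourceGroupName 'nat-rg' -Location 'eastus' `
    -Sku Standard -AllocationMethod Static

$natGateway = New-AzNatGateway -Name 'fw-nat-gateway' -ResourceGroupName 'nat-rg' -Location 'eastus' `
    -Sku Standard -PublicIpAddress $pip -IdleTimeoutInMinutes 4

$virtualNetwork = Get-AzVirtualNetwork -Name 'nat-vnet' -ResourceGroupName 'nat-rg'
$firewallSubnet = Get-AzVirtualNetworkSubnetConfig -Name 'AzureFirewallSubnet' -VirtualNetwork $virtualNetwork

# Attach the NAT gateway to the firewall subnet and save the change, as in the snippet above.
$firewallSubnet.NatGateway = $natGateway
$virtualNetwork | Set-AzVirtualNetwork
```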
-## Associate NAT gateway with Azure Firewall subnet - Azure CLI
+## Associate a NAT gateway with an Azure Firewall subnet - Azure CLI
The following example creates and attaches a NAT gateway with an Azure Firewall subnet using Azure CLI.
az network vnet subnet update --name AzureFirewallSubnet --vnet-name nat-vnet --
## Next steps -- [Designing virtual networks with NAT gateway resources](../virtual-network/nat-gateway/nat-gateway-resource.md)
+- [Design virtual networks with NAT gateway](../virtual-network/nat-gateway/nat-gateway-resource.md)
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-overview.md
Previously updated : 03/09/2021 Last updated : 01/27/2022 # Customer intent: As an IT admin, I want to learn about Front Door and what I can use it for.
With Front Door you can build, operate, and scale out your dynamic web applicati
Key features included with Front Door:
-* Accelerated application performance by using **[split TCP](front-door-routing-architecture.md#splittcp)**-based **[anycast protocol](front-door-routing-architecture.md#anycast)**.
+* Accelerated application performance by using **[split TCP](front-door-traffic-acceleration.md?pivots=front-door-classic#splittcp)**-based **[anycast protocol](front-door-traffic-acceleration.md?pivots=front-door-classic#anycast)**.
* Intelligent **[health probe](front-door-health-probes.md)** monitoring for backend resources.
frontdoor Front Door Routing Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-routing-architecture.md
na Previously updated : 09/28/2020 Last updated : 01/27/2022
+zone_pivot_groups: front-door-tiers
# Routing architecture overview
-When Azure Front Door receives your client requests, it will do one of two things. Either answer them if you enable caching or forward them to the appropriate application backend as a reverse proxy.
+Front Door traffic routing takes place over multiple stages. First, traffic is routed from the client to Front Door. Then, Front Door uses your configuration to determine the origin to send the traffic to. The Front Door web application firewall, routing rules, rules engine, and caching configuration all affect the routing process.
-## <a name = "anycast"></a>Selecting the Front Door environment for traffic routing (Anycast)
+The following diagram illustrates the routing architecture:
-Traffic routed to the Azure Front Door environments uses [Anycast](https://en.wikipedia.org/wiki/Anycast) for both DNS (Domain Name System) and HTTP (Hypertext Transfer Protocol) traffic, which allows for user requests to reach the closest environment in the fewest network hops. This architecture offers better round-trip times for end users by maximizing the benefits of Split TCP. Front Door organizes its environments into primary and fallback "rings". The outer ring has environments that are closer to users, offering lower latencies. The inner ring has environments that can handle the failover for the outer ring environment in case any issues happen. The outer ring is the preferred target for all traffic and the inner ring is to handle traffic overflow from the outer ring. Each frontend host or domain served by Front Door gets assigned a primary VIP (Virtual Internet Protocol addresses), which gets announced by environments in both the inner and outer ring. A fallback VIP is only announced by environments in the inner ring.
-This architecture ensures that requests from your end users always reach the closest Front Door environment. If the preferred Front Door environment is unhealthy, all traffic automatically moves to the next closest environment.
+![Diagram that shows the Front Door routing architecture, including each step and decision point.](media/front-door-routing-architecture/routing-process-standard-premium.png)
-## <a name = "splittcp"></a>Connecting to Front Door environment (Split TCP)
-[Split TCP](https://en.wikipedia.org/wiki/Performance-enhancing_proxy) is a technique to reduce latencies and TCP problems by breaking a connection that would incur a high round-trip time into smaller pieces. With Front Door environments closer to end users, TCP connections terminates inside the Front Door environment. A TCP connection that has a large round-trip time (RTT) to the application backend gets split into two separate connections. The "short connection" between the end user and the Front Door environment means the connection gets established over three short roundtrips instead of three long round trips, which results in saving latency. The "long connection" between the Front Door environment and the backend can be pre-established and then reused across other end users requests save connectivity time. The effect of Split TCP is multiplied when establishing a SSL/TLS (Transport Layer Security) connection as there are more round trips to secure a connection.
-## Processing request to match a routing rule
-After establishing a connection and completing a TLS handshake, the first step after a request lands on a Front Door environment is to match it to routing rule. The matching is determined by configurations on Front Door to which particular routing rule to match the request to. Read about how Front Door does [route matching](front-door-route-matching.md) to learn more.
+![Diagram that shows the Front Door routing architecture, including each step and decision point.](media/front-door-routing-architecture/routing-process-classic.png)
-## Identifying available backends in the backend pool for the routing rule
-Once Front Door has matched a routing rule for an incoming request, the next step is to get health probe status for the backend pool associated with the routing rule if there's no caching. Read about how Front Door monitors backend health using [Health Probes](front-door-health-probes.md) to learn more.
-## Forwarding the request to your application backend
-Finally, assuming caching isn't configured, the user request is forwarded to the "best" backend based on your [routing method](front-door-routing-methods.md) configuration.
+The rest of this article describes these steps in detail.
+
+## Select and connect to the Front Door edge location
+
+The user or client application initiates a connection to Front Door. The connection terminates at an edge location close to the user. Front Door's edge location processes the request.
+
+For more information about how requests are made to Front Door, see [Front Door traffic acceleration](front-door-traffic-acceleration.md).
++
+## Match request to a Front Door profile
+
+When Front Door receives an HTTP request, it uses the request's `Host` header to match the request to the correct customer's Front Door profile. If the request is using a [custom domain name](standard-premium/how-to-add-custom-domain.md), the domain name must be registered with Front Door to enable requests to be matched to your profile.
+++
+## Match request to a front door
+
+When Front Door receives an HTTP request, it uses the request's `Host` header to match the request to the correct customer's Front Door instance. If the request is using a [custom domain name](front-door-custom-domain.md), the domain name must be registered with Front Door to enable requests to be matched to your front door.
++
+The client and server perform a TLS handshake using the TLS certificate you've configured for your custom domain name, or by using the Front Door certificate when the `Host` header ends with `*.azurefd.net`.
+
+## Evaluate WAF rules
++
+If your domain has enabled the Web Application Firewall, WAF rules are evaluated.
+++
+If your frontend has enabled the Web Application Firewall, WAF rules are evaluated.
++
+If a rule has been violated, Front Door returns an error to the client and the request processing stops.
++
+## Match a route
+
+Front Door matches the request to a route. Learn more about the [route matching process](front-door-route-matching.md).
+
+The route specifies the [origin group](standard-premium/concept-origin.md) that the request should be sent to.
+++
+## Match a routing rule
+
+Front Door matches the request to a routing rule. Learn more about the [route matching process](front-door-route-matching.md).
+
+The route specifies the [backend pool](front-door-backend-pool.md) that the request should be sent to.
+++
+## Evaluate rule sets
+
+If you have defined [rule sets](standard-premium/concept-rule-set.md) for the route, they're executed in the order they're configured. [Rule sets can override the origin group](standard-premium/concept-rule-set-actions.md#OriginGroupOverride) specified in a route. Rule sets can also trigger a redirection response to the request instead of forwarding it to an origin.
+++
+## Evaluate rules engines
+
+If you have defined [rules engines](front-door-rules-engine.md) for the route, they're executed in the order they're configured. [Rules engines can override the backend pool](front-door-rules-engine-actions.md#route-configuration-overrides) specified in a routing rule. Rules engines can also trigger a redirection response to the request instead of forwarding it to a backend.
++
+## Return cached response
++
+If the Front Door routing rule has [caching](standard-premium/concept-caching.md) enabled, and the Front Door edge location's cache includes a valid response for the request, then Front Door returns the cached response.
+
+If caching is disabled or no response is available, the request is forwarded to the origin.
+++
+If the Front Door routing rule has [caching](front-door-caching.md) enabled, and the Front Door edge location's cache includes a valid response for the request, then Front Door returns the cached response.
+
+If caching is disabled or no response is available, the request is forwarded to the backend.
+++
+## Select origin
+
+Front Door selects an origin to use within the origin group. Origin selection is based on several factors, including:
+
+- The health of each origin, which Front Door monitors by using [health probes](front-door-health-probes.md).
+- The [routing method](front-door-routing-methods.md) for your origin group.
+- Whether you have enabled [session affinity](front-door-routing-methods.md#affinity).
+
+## Forward request to origin
+
+Finally, the request is forwarded to the origin.
+++
+## Select backend
+
+Front Door selects a backend to use within the backend pool. Backend selection is based on several factors, including:
+
+- The health of each backend, which Front Door monitors by using [health probes](front-door-health-probes.md).
+- The [routing method](front-door-routing-methods.md) for your backend pool.
+- Whether you have enabled [session affinity](front-door-routing-methods.md#affinity).
+
+## Forward request to backend
+
+Finally, the request is forwarded to the backend.
+ ## Next steps +
+- Learn how to [create a Front Door profile](standard-premium/create-front-door-portal.md).
+++ - Learn how to [create a Front Door](quickstart-create-front-door.md).+
frontdoor Front Door Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-routing-methods.md
# Front Door routing methods
-Azure Front Door supports different kinds of traffic-routing methods to determine how to route your HTTP/HTTPS traffic to different service endpoints. When your client requests reaching Front Door, the configured routing method gets applied to ensure the requests are forwarded to the best backend instance.
+Azure Front Door supports different kinds of traffic-routing methods to determine how to route your HTTP/HTTPS traffic to different backends. When client requests reach Front Door, the configured routing method gets applied to ensure the requests are forwarded to the best backend.
There are four traffic routing methods available in Front Door:
There are four traffic routing methods available in Front Door:
All Front Door configurations include monitoring of backend health and automated instant global failover. For more information, see [Front Door Backend Monitoring](front-door-health-probes.md). Your Front Door can work based off of a single routing method. But depending on your application needs, you can also combine multiple routing methods to build an optimal routing topology.
+> [!NOTE]
+> When you use the [Front Door rules engine](front-door-rules-engine.md), you can configure a rule to [override the backend pool](front-door-rules-engine-actions.md#route-configuration-overrides) for a request. The backend pool set by the rules engine overrides the routing process described in this article.
+ ## <a name = "latency"></a>Lowest latencies based traffic-routing Deploying backends in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that is 'closest' to your end users. The default traffic-routing method for your Front Door configuration forwards requests from your end users to the closest backend of the Front Door environment that received the request. Combined with the Anycast architecture of Azure Front Door, this approach ensures that each of your end users get maximum performance personalized based on their location.
frontdoor Front Door Traffic Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-traffic-acceleration.md
+
+ Title: Azure Front Door - traffic acceleration | Microsoft Docs
+description: This article helps you understand how Front Door accelerates traffic.
+
+documentationcenter: ''
+++
+ na
+ Last updated : 01/27/2022+
+zone_pivot_groups: front-door-tiers
++
+# Traffic acceleration
++
+Front Door optimizes the traffic path from the end user to the origin server. This article describes how traffic is routed from the user to Front Door and to the origin.
+++
+Front Door optimizes the traffic path from the end user to the backend server. This article describes how traffic is routed from the user to Front Door and to the backend.
++
+## <a name = "anycast"></a>Select the Front Door edge location for the request (Anycast)
+
+Globally, [Front Door has over 150 edge locations](edge-locations-by-region.md), or points of presence (PoPs), located in many countries and regions. Every Front Door PoP can serve traffic for any request.
+
+Traffic routed to the Azure Front Door edge locations uses [Anycast](https://en.wikipedia.org/wiki/Anycast) for both DNS (Domain Name System) and HTTP (Hypertext Transfer Protocol) traffic. Anycast allows for user requests to reach the closest edge location in the fewest network hops. This architecture offers better round-trip times for end users by maximizing the benefits of [Split TCP](#splittcp).
+
+Front Door organizes its edge locations into primary and fallback *rings*. The outer ring has edge locations that are closer to users, offering lower latencies. The inner ring has edge locations that can handle the failover for the outer ring edge location in case any issues happen.
+
+The outer ring is the preferred target for all traffic, and the inner ring is designed to handle traffic overflow from the outer ring. Each frontend host or domain served by Front Door gets assigned primary and fallback VIPs (Virtual Internet Protocol addresses), which get announced by edge locations in both the inner and outer rings.
+
+Front Door's architecture ensures that requests from your end users always reach the closest Front Door edge locations. If the preferred Front Door edge location is unhealthy, all traffic automatically moves to the next closest edge location.
+
+## <a name = "splittcp"></a>Connect to the Front Door edge location (Split TCP)
+
+[Split TCP](https://en.wikipedia.org/wiki/Performance-enhancing_proxy) is a technique to reduce latencies and TCP problems by breaking a connection that would incur a high round-trip time into smaller pieces.
++
+Split TCP enables the client's TCP connection to terminate inside a Front Door edge location close to the user. A separate TCP connection is established to the origin, and this separate connection might have a large round-trip time (RTT).
+
+The diagram below illustrates how three users, in different geographical locations, connect to a Front Door edge location close to their location. Front Door then maintains the longer-lived connection to the origin in Europe:
+
+![Diagram illustrating how Front Door uses a short TCP connection to the closest Front Door edge location to the user, and a longer TCP connection to the origin.](media/front-door-traffic-acceleration/split-tcp-standard-premium.png)
+
+Establishing a TCP connection requires 3-5 round trips from the client to the server. Front Door's architecture improves the performance of establishing the connection. The "short connection" between the end user and the Front Door edge location means the connection gets established over 3-5 short round trips instead of 3-5 long round trips, which reduces latency. The "long connection" between the Front Door edge location and the origin can be pre-established and then reused across other end users' requests, saving connectivity time. The effect of Split TCP is multiplied when establishing an SSL/TLS (Transport Layer Security) connection, because there are more round trips to secure a connection.
+++
+Split TCP enables the client's TCP connection to terminate inside a Front Door edge location close to the user. A separate TCP connection is established to the backend, and this separate connection might have a large round-trip time (RTT).
+
+The diagram below illustrates how three users, in different geographical locations, connect to a Front Door edge location close to their location. Front Door then maintains the longer-lived connection to the backend in Europe:
+
+![Diagram illustrating how Front Door uses a short TCP connection to the closest Front Door edge location to the user, and a longer TCP connection to the backend.](media/front-door-traffic-acceleration/split-tcp-classic.png)
+
+Establishing a TCP connection requires 3-5 round trips from the client to the server. Front Door's architecture improves the performance of establishing the connection. The "short connection" between the end user and the Front Door edge location means the connection gets established over 3-5 short round trips instead of 3-5 long round trips, which reduces latency. The "long connection" between the Front Door edge location and the backend can be pre-established and then reused across other end users' requests, saving connectivity time. The effect of Split TCP is multiplied when establishing an SSL/TLS (Transport Layer Security) connection, because there are more round trips to secure a connection.
++
+## Next steps
+
+- Learn about the [Front Door routing architecture](front-door-routing-architecture.md).
frontdoor Concept Route https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-route.md
Previously updated : 02/18/2021 Last updated : 01/12/2022
A Front Door Standard/Premium routing configuration is composed of two major par
> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> [!NOTE]
+> When you use the [Front Door rules engine](concept-rule-set.md), you can configure a rule to [override the origin group](concept-rule-set-actions.md#OriginGroupOverride) for a request. The origin group set by the rules engine overrides the routing process described in this article.
+ ### Incoming match (left-hand side) The following properties determine whether the incoming request matches the routing rule (or left-hand side):
Given that configuration, the following example matching table would result:
### Routing decision
-Once Azure Front Door Standard/Premium has matched to a single routing rule, it then needs to choose how to process the request. If Azure Front Door Standard/Premium has a cached response available for the matched routing rule, then the request gets served back to the client. The next thing Azure Front Door Standard/Premium evaluates is whether or not you have a Rule Set for the matched routing rule. If there isn't a Rule Set defined, then the request gets forwarded to the backend pool as is. Otherwise, the Rule Set gets executed in the order as they're configured.
+Once Azure Front Door Standard/Premium has matched a request to a single routing rule, it needs to choose how to process the request. If Azure Front Door Standard/Premium has a cached response available for the matched routing rule, then the cached response gets served back to the client.
+
+Finally, Azure Front Door Standard/Premium evaluates whether or not you have a [rule set](concept-rule-set.md) for the matched routing rule. If there's no rule set defined, then the request gets forwarded to the origin group as-is. Otherwise, the rule sets get executed in the order they're configured. [Rule sets can override the route](concept-rule-set-actions.md#OriginGroupOverride), forcing traffic to a specific origin group.
## Next steps
frontdoor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/overview.md
Previously updated : 02/18/2021 Last updated : 01/27/2022
> [!IMPORTANT] > This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
-Azure Front Door Standard/Premium is a fast, reliable, and secure modern cloud CDN that uses the Microsoft global edge network and integrates with intelligent threat protection. It combines the capabilities of Azure Front Door, Azure Content Delivery Network (CDN) standard, and Azure Web Application Firewall (WAF) into a single secure cloud CDN platform.
+Azure Front Door Standard/Premium is a fast, reliable, and secure modern cloud Content Delivery Network (CDN) that uses the Microsoft global edge network and integrates with intelligent threat protection. It combines the capabilities of Azure Front Door, Azure CDN standard, and Azure Web Application Firewall (WAF) into a single secure cloud CDN platform.
With Azure Front Door Standard/Premium, you can transform your global consumer and enterprise applications into secure and high-performing personalized modern applications with contents that reach a global audience at the network edge close to the user. It also enables your application to scale out without warm-up while benefitting from the global HTTP load balancing with instant failover. :::image type="content" source="../media/overview/front-door-overview.png" alt-text="Azure Front Door Standard/Premium architecture" lightbox="../media/overview/front-door-overview-expanded.png":::
-Azure Front Door Standard/Premium works at Layer 7 (HTTP/HTTPS layer) using anycast protocol with split TCP and Microsoft's global network to improve global connectivity. Based on your customized routing method using rules set, you can ensure that Azure Front Door will route your client requests to the fastest and most available origin. An application origin is any Internet-facing service hosted inside or outside of Azure. Azure Front Door Standard/Premium provides a range of traffic-routing methods and origin health monitoring options to suit different application needs and automatic failover scenarios. Similar to Traffic Manager, Front Door is resilient to failures, including failures to an entire Azure region.
+Azure Front Door Standard/Premium works at Layer 7 (HTTP/HTTPS layer), by using anycast with split TCP and Microsoft's global network to improve global connectivity. Based on your customized routing method using rules set, you can ensure that Azure Front Door will route your client requests to the fastest and most available origin. An application origin is any Internet-facing service hosted inside or outside of Azure. Azure Front Door Standard/Premium provides a range of traffic-routing methods and origin health monitoring options to suit different application needs and automatic failover scenarios. Similar to Traffic Manager, Front Door is resilient to failures, including failures to an entire Azure region.
-Azure Front Door also protects your app at the edges with integrated Web Application Firewall protection, Bot Protection, and built-in layer 3/layer 4 distributed denial of service (DDoS) protection. It also secures your private back-ends with private link service. Azure Front Door gives you Microsoft’s best-in-practice security at global scale. 
+Azure Front Door also protects your app at the edges with integrated Web Application Firewall protection, Bot Protection, and built-in layer 3/layer 4 distributed denial of service (DDoS) protection. It also secures your private back-ends with private link service. Azure Front Door gives you Microsoft's best-in-practice security at global scale.
>[!NOTE] > Azure provides a suite of fully managed load-balancing solutions for your scenarios.
Azure Front Door also protects your app at the edges with integrated Web Applica
## Why use Azure Front Door Standard/Premium (Preview)?
-Azure Front Door Standard/Premium provides a single unified platform which caters to both dynamic and static acceleration with built in turnkey security integration, and a simple and predictable pricing model. Front Door also enables you to define, manage, and monitor the global routing for your app.
+Azure Front Door Standard/Premium provides a single unified platform, which caters to both dynamic and static acceleration with built-in turnkey security integration, and a simple and predictable pricing model. Front Door also enables you to define, manage, and monitor the global routing for your app.
Key features included with Azure Front Door Standard/Premium (Preview): -- Accelerated application performance by using **[split TCP-based](../front-door-routing-architecture.md#splittcp)** anycast protocol.
+- Accelerate application performance by using [anycast](../front-door-traffic-acceleration.md?pivots=front-door-standard-premium#anycast) and **[split TCP connections](../front-door-traffic-acceleration.md?pivots=front-door-standard-premium#splittcp)**.
-- Intelligent **[health probe](../front-door-health-probes.md)** monitoring and load balancing among **[origins](concept-origin.md)**.
+- Load balance across **[origins](concept-origin.md)** and use intelligent **[health probe](../front-door-health-probes.md)** monitoring.
- Define your own **[custom domain](how-to-add-custom-domain.md)** with flexible domain validation. -- Application security with integrated **[Web Application Firewall (WAF)](../../web-application-firewall/afds/afds-overview.md)**.
+- Secure applications with integrated **[Web Application Firewall (WAF)](../../web-application-firewall/afds/afds-overview.md)**.
-- SSL offload and integrated **[certificate management](how-to-configure-https-custom-domain.md)**.
+- Perform SSL offload and use integrated **[certificate management](how-to-configure-https-custom-domain.md)**.
-- Secure your origins with **[Private Link](concept-private-link.md)**.
+- Secure your origins with **[Private Link](concept-private-link.md)**.
-- Customizable traffic routing and optimizations via **[Rule Set](concept-rule-set.md)**.
+- Customize traffic routing and optimizations via **[Rule Sets](concept-rule-set.md)**.
-- **[Built-in reports](how-to-reports.md)** with all-in-one dashboard for both Front Door and security patterns.
+- Analyze **[built-in reports](how-to-reports.md)** with an all-in-one dashboard for both Front Door and security patterns.
-- **[Real-time monitoring](how-to-monitor-metrics.md)** and alerts that integrate with Azure Monitoring.
+- **[Monitor your Front Door traffic in real time](how-to-monitor-metrics.md)**, and configure alerts that integrate with Azure Monitor.
-- **[Logging](how-to-logs.md)** for each Front Door request and failed health probes.
+- **[Log each Front Door request](how-to-logs.md)** and failed health probes.
-- Native support of end-to-end IPv6 connectivity and HTTP/2 protocol.
+- Natively support end-to-end IPv6 connectivity and the HTTP/2 protocol.
## Pricing
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/patient-everything.md
Previously updated : 12/09/2021 Last updated : 1/27/2022
healthcare-apis Tutorial Web App Public App Reg https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-public-app-reg.md
Last updated 01/03/2020
# Client application registration for Azure API for FHIR
-In the previous tutorial, you deployed and set up your Azure API for FHIR. Now that you have your Azure API for FHIR setup, we will register a public client application. You can read through the full [register a public client app](register-public-azure-ad-client-app.md) how-to guide for more details or troubleshooting, but we have called out the major steps for this tutorial below.
+In the previous tutorial, you deployed and set up your Azure API for FHIR. Now that you have your Azure API for FHIR set up, we'll register a public client application. You can read through the full [register a public client app](register-public-azure-ad-client-app.md) how-to guide for more details or troubleshooting, but we've called out the major steps for this tutorial in this article.
1. Navigate to Azure Active Directory 1. Select **App Registration** --> **New Registration**
In the previous tutorial, you deployed and set up your Azure API for FHIR. Now t
## Client application settings
-Once your client application is registered, copy the Application (client) ID and the Tenant ID from the Overview Page. You will need these two values later when accessing the client.
+Once your client application is registered, copy the Application (client) ID and the Tenant ID from the Overview page. You'll need these two values later when accessing the client.
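If you prefer to look up these values from the command line, a quick Azure PowerShell sketch (using a hypothetical application display name) is:

```azurepowershell
# Sketch: look up the application (client) ID and your tenant ID (hypothetical display name).
# Depending on your Az module version, the ID property may be named AppId or ApplicationId.
(Get-AzADApplication -DisplayName 'my-fhir-client-app').AppId
(Get-AzContext).Tenant.Id
```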
:::image type="content" source="media/tutorial-web-app/client-id-tenant-id.png" alt-text="Screenshot of the client application settings pane, with the application and directory IDs highlighted."::: ### Connect with web app
-If you have [written your web app](tutorial-web-app-write-web-app.md) to connect with the Azure API for FHIR, you also need to set the correct authentication options.
+If you've [written your web app](tutorial-web-app-write-web-app.md) to connect with the Azure API for FHIR, you also need to set the correct authentication options.
1. In the left menu, under **Manage**, select **Authentication**. 1. To add a new platform configuration, select **Web**.
-1. Set up the redirect URI in preparation for when you create your web application in the fourth part of this tutorial. To do this, add `https://\<WEB-APP-NAME>.azurewebsites.net` to the redirect URI list. If you choose a different name during the step where you [write your web app](tutorial-web-app-write-web-app.md), you will need to come back and update this.
+1. Set up the redirect URI in preparation for when you create your web application in the fourth part of this tutorial. To do this, add `https://\<WEB-APP-NAME>.azurewebsites.net` to the redirect URI list. If you choose a different name during the step where you [write your web app](tutorial-web-app-write-web-app.md), you'll need to come back and update this.
1. Select the **Access Token** and **ID token** check boxes.
If you have [written your web app](tutorial-web-app-write-web-app.md) to connect
## Add API permissions
-Now that you have setup the correct authentication, set the API permissions:
+Now that you have set up the correct authentication, set the API permissions:
-1. Select **API permissions** and click **Add a permission**.
+1. Select **API permissions** and select **Add a permission**.
1. Under **APIs my organization uses**, search for Azure Healthcare APIs.
-1. Select **user_impersonation** and click **add permissions**.
+1. Select **user_impersonation** and select **add permissions**.
:::image type="content" source="media/tutorial-web-app/api-permissions.png" alt-text="Screenshot of the Add API permissions blade, with the steps to add API permissions highlighted."::: ## Next Steps
-You now have a public client application. In the next tutorial, we will walk through testing and gaining access to this application through Postman.
+You now have a public client application. In the next tutorial, we'll walk through testing and gaining access to this application through Postman.
>[!div class="nextstepaction"] >[Test client application in Postman](tutorial-web-app-test-postman.md)
healthcare-apis Tutorial Web App Write Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-write-web-app.md
Last updated 01/03/2020
# Write Azure web application to read FHIR data in Azure API for FHIR
-Now that you are able to connect to your FHIR server and POST data, you are ready to write a web application that will read FHIR data. In this final step of the tutorial, we will walk through writing and accessing the web application.
+Now that you're able to connect to your FHIR server and POST data, you're ready to write a web application that will read FHIR data. In this final step of the tutorial, we'll walk through writing and accessing the web application.
## Create web application In Azure, select **Create a resource** and select **Web App**. Make sure to name your web application whatever you specified in the redirect URI for your client application or go back and update the redirect URI with the new name. ![Create Web Application](media/tutorial-web-app/create-web-app.png)
-Once the web application is available, **Go to resource**. Select **App Service Editor (Preview)** under Development Tools on the right and then select **Go**. Selecting Go will open up the App Service Editor. Right click in the grey space under *Explore* and create a new file called **https://docsupdatetracker.net/index.html**.
+Once the web application is available, **Go to resource**. Select **App Service Editor (Preview)** under Development Tools on the right and then select **Go**. Selecting Go will open up the App Service Editor. Right-click in the grey space under *Explore* and create a new file called **https://docsupdatetracker.net/index.html**.
-Below is the code that you can input into **https://docsupdatetracker.net/index.html**. You will need to update the following items:
+Here's the code that you can input into **https://docsupdatetracker.net/index.html**. You'll need to update the following items:
* **clientId** - Update with your client application ID. This ID will be the same ID you pulled when retrieving your token * **authority** - Update with your Azure AD tenant ID * **FHIRendpoint** - Update the FHIRendpoint to have your FHIR service name
Below is the code that you can input into **https://docsupdatetracker.net/index.html**. You will need to updat
</html> ```
-From here, you can go back to your web application resource and open the URL found on the Overview page. Log in to see the patient James Tiberious Kirk that you previously created.
+From here, you can go back to your web application resource and open the URL found on the Overview page. Sign in to see the patient James Tiberious Kirk that you previously created.
## Next Steps
-You have successfully deployed the Azure API for FHIR, registered a public client application, tested access, and created a small web application. Check out the Azure API for FHIR supported features as a next step.
+You've successfully deployed the Azure API for FHIR, registered a public client application, tested access, and created a small web application. Check out the Azure API for FHIR supported features as a next step.
>[!div class="nextstepaction"] >[Supported Features](fhir-features-supported.md)
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
--+ Last updated 01/06/2022
Last updated 01/06/2022
[SMART on FHIR](https://docs.smarthealthit.org/) is a set of open specifications to integrate partner applications with FHIR servers and electronic medical records systems that have FHIR interfaces. One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for an FHIR server and start an authentication sequence.
-Authentication is based on OAuth2. But because SMART on FHIR uses parameter naming conventions that are not immediately compatible with Azure Active Directory (Azure AD), the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
+Authentication is based on OAuth2. But because SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
This tutorial describes how to use the proxy to enable SMART on FHIR applications with the Azure API for FHIR.
This tutorial describes how to use the proxy to enable SMART on FHIR application
SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the Azure API for FHIR uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
-You will also need a client application registration. Most SMART on FHIR applications are single-page JavaScript applications. So you should follow the instructions for configuring a [public client application in Azure AD](register-public-azure-ad-client-app.md).
+You'll also need a client application registration. Most SMART on FHIR applications are single-page JavaScript applications. So you should follow the instructions for configuring a [public client application in Azure AD](register-public-azure-ad-client-app.md).
After you complete these steps, you should have:
To use SMART on FHIR, you must first authenticate and authorize the app. The fir
If you don't have an ownership role in the app, contact the app owner and ask them to grant admin consent for you in the app.
-If you do have administrative privileges, complete the following steps to grant admin consent to yourself directly. (You also can grant admin consent to yourself later when you are prompted in the app.) You can complete the same steps to add other users as owners, so they can view and edit this app registration.
+If you do have administrative privileges, complete the following steps to grant admin consent to yourself directly. (You also can grant admin consent to yourself later when you're prompted in the app.) You can complete the same steps to add other users as owners, so they can view and edit this app registration.
To add yourself or another user as owner of an app:
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/validation-against-profiles.md
For example:
`POST https://myAzureAPIforFHIR.azurehealthcareapis.com/Patient/$validate`
-This request will create the new resource you are specifying in the request payload and validate the uploaded resource. Then, it will return an `OperationOutcome` as a result of the validation on the new resource.
+This request will create the new resource you're specifying in the request payload and validate the uploaded resource. Then, it will return an `OperationOutcome` as a result of the validation on the new resource.
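+For instance, a minimal sketch of this call from Python might look like the following. It isn't part of the official samples: it assumes you already have an OAuth 2.0 access token for the FHIR service, and the service URL and resource contents are placeholders.
+
+```python
+import requests
+
+# Placeholder values - replace with your FHIR service URL and a valid access token.
+fhir_url = "https://myAzureAPIforFHIR.azurehealthcareapis.com"
+access_token = "<ACCESS-TOKEN>"
+
+# A resource to validate (and create, per the behavior described above).
+patient = {
+    "resourceType": "Patient",
+    "name": [{"family": "Kirk", "given": ["James", "Tiberious"]}],
+}
+
+response = requests.post(
+    f"{fhir_url}/Patient/$validate",
+    json=patient,
+    headers={
+        "Authorization": f"Bearer {access_token}",
+        "Content-Type": "application/fhir+json",
+    },
+)
+
+# The response body is an OperationOutcome describing the validation result.
+print(response.status_code)
+print(response.json())
+```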
## Validate on resource CREATE or resource UPDATE
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/patient-everything.md
Right now, [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-ty
As described, [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) links reference another patient that's considered equally valid to the original. After the Patient-everything operation is run, if the patient has `seealso` links to other patients, the operation runs Patient-everything on each `seealso` link. This means if a patient links to five other patients with a type `seealso` link, we'll run Patient-everything on each of those five patients. > [!Note]
-> This is set up to only follow `seealso` links one **layer deep**. It doesn't process a `seealso` link's `seealso` links.
+> This is set up to only follow `seealso` links one layer deep. It doesn't process a `seealso` link's `seealso` links.
[![See also flow diagram.](media/patient-everything/see-also-flow.png)](media/patient-everything/see-also-flow.png#lightbox)
hpc-cache Prime Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/prime-cache.md
+
+ Title: Pre-load files in Azure HPC Cache (Preview)
+description: Use the cache priming feature (preview) to populate or preload cache contents before files are requested
+++ Last updated : 01/26/2022+++
+# Pre-load files in Azure HPC Cache (preview)
+
+> [!IMPORTANT]
+> Cache priming is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure HPC Cache's priming feature (preview) allows customers to pre-load files in the cache.
+
+You can use this feature to fetch your expected working set of files and populate the cache before work begins. This technique is sometimes called cache warming.
+
+Priming the cache improves performance by increasing cache "hits". If a client machine requests a resource that must be read from the back-end storage, the latency for fetching and returning that file can be significant - especially if the storage system is an on-premises NAS. If you prime the cache with the files it will need before you start a compute task, file requests will be more efficient during the job.
+
+This feature uses a JSON manifest file to specify which files to load. Each priming job takes one manifest file.
+
+Create cache priming jobs by using the Azure portal, or with the [Azure REST API endpoints](#azure-rest-apis) mentioned at the end of this document.
+
+You can create up to 10 priming jobs. Depending on the size of the cache, between three and 10 priming jobs can run at the same time; others are queued until resources free up.
+
+## Setup and prerequisites
+
+Before you can create a priming job, take these steps:
+
+1. Create an Azure HPC Cache. (Refer to [Create an Azure HPC Cache](hpc-cache-create.md) for help.)
+1. Define at least one storage target, including creating its aggregated namespace path (or paths). For help, see the [storage target documentation](hpc-cache-add-storage.md).
+1. Create the priming job manifest (instructions [below](#create-a-priming-manifest-file)) and store it in a Blob container that is accessible to the HPC Cache. Or, if using Azure REST APIs to create your priming jobs, you can store the manifest at any URL that your HPC Cache can access.
+
+## Create a priming manifest file
+
+The priming manifest is a JSON file that defines what content will be preloaded in the cache when the priming job runs.
+
+In the manifest, specify the namespace path to the directories or files that you want to pre-load. You also can configure include and exclude rules to customize which content is loaded.
+
+### Sample priming manifest
+
+```json
+
+{
+ "config": {
+ "cache_mode": "0",
+ "maxreadsize": "0",
+ "resolve_symlink": "0"
+ },
+
+ "files": [
+ "/bin/tool.py",
+ "/bin/othertool.py"
+ ],
+
+ "directories": [
+ {"path": "/lib/toollib"},
+ {
+ "path": "/lib/otherlib",
+ "include": ["\\.py$"]
+ },
+ {
+ "path": "/lib/xsupport",
+ "include": ["\\.py$"],
+ "exclude": ["\\.elc$", "\\.pyc$"]
+ }
+ ],
+
+ "include": ["\\.txt$"],
+ "exclude": ["~$", "\\.bak$"]
+}
+```
+
+There are four sections in the priming manifest file:
+
+* `config` - settings for the priming job
+* `files` - individual files that will be pre-loaded
+* `directories` - file paths that will be pre-loaded
+* `include` and `exclude` - regular expression strings that modify the directory priming task
+
+### Configuration settings
+
+There are three settings in the `config` section of the manifest file (a short example follows this list):
+
+* `cache_mode` - Sets the behavior of the priming job. Options are:
+
+ * 0 - Data - Load all specified file data and attributes in the cache. This is the default.
+ * 1 - Metadata - Load only the file attributes.
+ * 2 - Estimate - Load the file attributes, and also return an estimated number of the files, directories, and total data size (in bytes) that would be primed if the content in this manifest was run in Data mode.
+
+* `maxreadsize` - Sets the maximum number of bytes that will be pre-loaded per file. Leave this set to 0 (the default) to always load the entire file regardless of size.
+
+* `resolve_symlink` - Set this to true (`1`) if you want to resolve symbolic links when priming. If `resolve_symlink` is enabled, symbolic link targets are pre-loaded entirely, regardless of include and exclude rules.
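+
+As a quick, non-authoritative sketch, the following Python snippet writes a manifest that uses Estimate mode to size a priming job before you run it in Data mode. The namespace path and file name are placeholders.
+
+```python
+import json
+
+# Hypothetical manifest that only estimates a priming job:
+# cache_mode "2" returns file and directory counts plus total bytes, without loading data.
+manifest = {
+    "config": {
+        "cache_mode": "2",       # 0 = Data (default), 1 = Metadata, 2 = Estimate
+        "maxreadsize": "0",      # 0 = always pre-load whole files
+        "resolve_symlink": "0",  # 1 = follow symbolic link targets when priming
+    },
+    "directories": [{"path": "/lib/toollib"}],  # placeholder namespace path
+}
+
+with open("estimate-manifest.json", "w") as manifest_file:
+    json.dump(manifest, manifest_file, indent=2)
+```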
+
+### File and directory paths
+
+The `files` and `directories` sections of the manifest specify which files are pre-loaded during the priming job.
+
+Specify files and directories by their cache namespace paths. These are the same paths that clients use to access the files through the HPC Cache, and they do not need to be the same as the storage system paths or export names. Read [Plan the aggregated namespace](hpc-cache-namespace.md) to learn more.
+
+Start paths from the root of the cache namespace.
+
+> [!NOTE]
+> Items listed in `files` are included even if they match later exclude rules.
+
+The `directories` value holds a list of paths that are assessed for content to pre-load in the cache. All subtrees are included in the priming job unless they're specifically excluded.
+
+Directory path values can have their own include and exclude statements, which apply only to the path they're defined with. For example, the line `"directories": [{"path": "/cache/library1", "exclude": "\\.bak$"}]` would pre-load all files under the namespace path /cache/library1/ except for files in that path that end in `.bak`.
+
+Directory-level include/exclude statements are not the same as the global include and exclude statements described [below](#include-and-exclude-statements). Be careful to read the details about how directory-level statements interact with global include and exclude statements.
+
+> [!NOTE]
+> Because of the way the manifest file is parsed, two escape characters are needed to protect a problematic string character in include and exclude statements. For example, use the expression `\\.txt` to match .txt files. <!-- double-check the \\.txt formatting, some interpretations might drop a \ -->
+
+### Include and exclude statements
+
+After the files and directories, you can specify global `include` and `exclude` statements. These global settings apply to all directories. They do not apply to files that were specified in a `files` statement.
+
+In general, rules are matched in order, so statements that appear earlier in the manifest file are applied before later ones. The descriptions in this article also assume that earlier rules have already been applied and not matched.
+
+* **Include** statements - When scanning directories, the priming job ignores any files that **do not match** the regular expressions in the `include` setting.
+
+* **Exclude** statements - When scanning directories, the priming job ignores any file that **matches** the regular expressions in the `exclude` setting.
+
+ More about how global exclude rules interact with other rules:
+
+ * Global exclude rules override global include rules. That is, if a file name matches both a global *include* expression and a global *exclude* expression, it will **not** be pre-loaded by the priming job.
+
+ * Directory-level include rules override global exclude rules.
+
+ A file name that matches both a *directory-level* ***include*** expression and a *global* ***exclude*** expression **will** be pre-loaded by the priming job.
+
+ * File statements override all exclude rules.
+
+You can omit include and exclude statements to prime all files in the directories.
+
+More information about include/exclude rules and how they match file names (a short worked example follows this list):
+
+* If a name matches an entry in the per-directory exclude list, it is skipped.
+
+* If there is a per-directory include list, the name is included or excluded depending on whether or not it appears in that list.
+
+* If a name matches an entry in the global exclude list, it is skipped.
+
+* If there is a global include list, the name is included if it appears on that list, or excluded if it does not appear on that list.
+
+* If there is a per-directory include list, the name is excluded. Otherwise, it is included.
+
+* If a directory and an ancestor of that directory both appear in the directories list, the specific rules for the directory are applied along with the global rules and the rules for the ancestor directory are ignored.
+
+* Names and rules are case sensitive. Case-insensitive sources are not supported. <!-- this might change in future? -->
+
+* The total number of file rules plus directory rules may not exceed 4000. The number of regular expression rules for any include/exclude list may not exceed 5.
+
+* If one directory specification overlaps another, the one with the more explicit path takes precedence.
+
+* It is an error for a manifest to specify the same path more than once in the file list or directory list.
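+
+As a short worked example (the file paths and patterns here are hypothetical), the following snippet shows how the precedence rules might apply to a small manifest fragment; the expected outcomes are noted in the comments.
+
+```python
+# Hypothetical manifest fragment used to illustrate the matching rules.
+manifest = {
+    "files": ["/bin/notes.bak"],  # file entries override all exclude rules
+    "directories": [
+        {"path": "/lib/xsupport", "include": ["\\.py$"]}  # directory-level include
+    ],
+    "exclude": ["\\.bak$", "backup"],  # global exclude rules
+}
+
+# Expected outcomes, based on the precedence rules described above:
+#   /bin/notes.bak          -> primed  (listed in "files"; file statements override excludes)
+#   /lib/xsupport/app.py    -> primed  (matches the directory-level include)
+#   /lib/xsupport/backup.py -> primed  (directory-level include overrides the global exclude)
+#   /lib/xsupport/app.pyc   -> skipped (doesn't match the directory-level include)
+```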
+
+### Upload the priming manifest file
+
+When your manifest file is ready, upload it to an Azure blob container in a storage account accessible from your HPC Cache. If using APIs instead of the portal to create your priming jobs, you have the option to store it on another webserver, but you need to take different steps to make sure the cache can access it.
+
+* If you create a priming job from the Azure portal, select the manifest file in the HPC Cache **Prime cache** settings page as described below. Selecting it from the cache settings automatically creates a [Shared Access Signature (SAS)](../storage/common/storage-sas-overview.md) that gives the cache limited access to the priming file.
+
+* If you use APIs to create the priming job instead of using the portal, make sure that the cache is authorized to access that file. Either store the file in an accessible location (for example, on a webserver you control that is inside your cache or storage network), or manually create a SAS URL for your priming file.
+
+ Read [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../storage/common/storage-sas-overview.md) to learn how to create an Account SAS URL for your priming manifest file. The manifest file must be accessible with HTTPS.
+
+The cache accesses the manifest file once when the priming job starts. The SAS URL generated for the cache is not exposed.
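+
+If you need to create the SAS URL yourself (for example, when you call the REST API directly), one possible approach is a blob-level SAS generated with the Python `azure-storage-blob` package, as in the sketch here. The account, container, blob, and key values are placeholders, and an account SAS created as described in the linked article works too.
+
+```python
+from datetime import datetime, timedelta, timezone
+from azure.storage.blob import BlobSasPermissions, generate_blob_sas
+
+# Placeholder storage details - replace with your own.
+account_name = "mystorageaccount"
+container_name = "priming-manifests"
+blob_name = "estimate-manifest.json"
+account_key = "<STORAGE-ACCOUNT-KEY>"
+
+# Create a read-only SAS token that expires in 24 hours.
+sas_token = generate_blob_sas(
+    account_name=account_name,
+    container_name=container_name,
+    blob_name=blob_name,
+    account_key=account_key,
+    permission=BlobSasPermissions(read=True),
+    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
+)
+
+# HTTPS URL that the cache can use to read the manifest.
+priming_manifest_url = (
+    f"https://{account_name}.blob.core.windows.net/"
+    f"{container_name}/{blob_name}?{sas_token}"
+)
+print(priming_manifest_url)
+```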
+
+## Create a priming job
+
+Use the Azure portal to create a priming job. View your Azure HPC Cache in the portal and select the **Prime cache** page under the **Settings** heading.
+
+![screenshot of the Priming page in the portal, with several completed jobs.](media/priming-preview.png)
+<!-- to do: screenshot with 'preview' on GUI heading, screenshot with more diverse jobs and statuses -->
+
+Select **Add priming job** at the top of the table to define a new job.
+
+In the **Job name** field, type a unique name for the priming job.
+
+Use the **Priming file** field to select your priming manifest file. Select the storage account, container, and file where your priming manifest is stored.
+
+![screenshot of the Add priming job page, with a job name and priming file path filled in. Below the Priming file field is a link labeled "Select from existing blob location".](media/create-priming-job.png)
+
+To choose the priming manifest file, select the link to pick a storage target. Then select the container where your .json manifest file is stored.
+
+If you can't find the manifest file, your cache might not be able to access the file's container. Make sure that the cache has network connectivity to the storage account and is able to read data from the container.
+
+## Manage priming jobs
+
+Priming jobs are listed in the **Prime cache** page in the Azure portal.
+
+![screenshot of the priming jobs list in the portal, with jobs in various states (running, paused, and success). The cursor has clicked the ... symbol at the right side of one job's row, and a context menu shows options to pause or resume.](media/prime-cache-list.png)
+
+This page shows each job's name, its state, its current status, and summary statistics about the priming progress. The summary in the **Details** column updates periodically as the job progresses.
+
+Select the **...** control at the right of the table to pause or resume a priming job.
+
+To delete a priming job, select it in the list and use the delete control at the top of the table.
+
+## Azure REST APIs
+
+You can use these REST API endpoints to create an HPC Cache priming job. These are part of the `2021-10-01-preview` version of the REST API, so make sure you use that string in the *api-version* term.
+
+Read the [Azure REST API reference](/rest/api/azure/) to learn how to use this interface.
+
+### Add a priming job
+
+```rest
+
+URL: POST
+
+ https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME/addPrimingJob?api-version=2021-10-01-preview
+
+ BODY:
+ {
+ "primingJobName": "MY-PRIMING-JOB",
+ "primingManifestUrl": "MY-JSON-MANIFEST-FILE-URL"
+ }
+
+```
+
+For the `primingManifestUrl` value, pass the file's SAS URL or other HTTPS URL that is accessible to the cache. Read [Upload the priming manifest file](#upload-the-priming-manifest-file) to learn more.
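+
+As a rough sketch (not official sample code), the same call could be made from Python with the `requests` and `azure-identity` packages. The subscription, resource group, cache, and manifest URL values are placeholders, `management.azure.com` is the ARM host for the public Azure cloud, and the caller needs permission to manage the cache.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholder identifiers - replace with your own values.
+subscription_id = "<SUBSCRIPTION-ID>"
+resource_group = "<RESOURCE-GROUP-NAME>"
+cache_name = "<CACHE-NAME>"
+manifest_url = "<MY-JSON-MANIFEST-FILE-URL>"  # SAS URL or other HTTPS URL the cache can reach
+
+# Acquire an Azure Resource Manager token for the signed-in identity.
+credential = DefaultAzureCredential()
+token = credential.get_token("https://management.azure.com/.default").token
+
+url = (
+    f"https://management.azure.com/subscriptions/{subscription_id}"
+    f"/resourceGroups/{resource_group}/providers/Microsoft.StorageCache"
+    f"/caches/{cache_name}/addPrimingJob"
+)
+
+response = requests.post(
+    url,
+    params={"api-version": "2021-10-01-preview"},
+    headers={"Authorization": f"Bearer {token}"},
+    json={"primingJobName": "MY-PRIMING-JOB", "primingManifestUrl": manifest_url},
+)
+
+# The operation runs asynchronously; a 2xx status indicates the request was accepted.
+print(response.status_code)
+```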
+
+### Remove a priming job
+
+```rest
+
+URL: POST
+ https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME/removePrimingJob/MY-JOB-ID-TO-REMOVE?api-version=2021-10-01-preview
+
+BODY:
+```
+
+### Get priming jobs
+
+Use the `Get cache` API to list a cache's priming jobs. This API returns a lot of information about the cache; look for priming job information in the "cache properties" section.
+
+Priming job names and IDs are returned, along with other information.
+
+```rest
+
+URL: GET
+ https://MY-ARM-HOST/subscriptions/MY-SUBSCRIPTION-ID/resourceGroups/MY-RESOURCE-GROUP-NAME/providers/Microsoft.StorageCache/caches/MY-CACHE-NAME?api-version=2021-10-01-preview
+
+BODY:
+
+```
+
+## Frequently asked questions
+
+* Can I reuse a priming job?
+
+ Not exactly, because each priming job in the list must have a unique name. After you delete a priming job from the list, you can create a new job with the same name.
+
+ You **can** create multiple priming jobs that reference the same manifest file.
+
+* How long does a failed or completed priming job stay in the list?
+
+ Priming jobs persist in the list until you delete them. On the portal **Prime cache** page, select the checkbox next to the job and select the **Delete** control at the top of the list.
+
+* What happens if the content I'm pre-loading is larger than my cache storage?
+
+ If the cache becomes full, files fetched later will overwrite files that were primed earlier.
+
+## Next steps
+
+* For help with HPC Cache priming (preview) or to report a problem, use the standard Azure support process, described in [Get help with Azure HPC Cache](hpc-cache-support-ticket.md).
+* Learn more about [Azure REST APIs](/rest/api/azure/)
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/about-iot-dps.md
DPS automates device provisioning with Azure IoT Hub. Learn more about [IoT Hub]
> [!NOTE] > Provisioning of nested edge devices (parent/child hierarchies) is not currently supported by DPS.
+IoT Central applications use an internal DPS instance to manage device connections. To learn more, see:
+
+* [Get connected to Azure IoT Central](../iot-central/core/concepts-get-connected.md)
+* [Tutorial: Create and connect a client application to your Azure IoT Central application](../iot-central/core/tutorial-connect-device.md)
+ ## Next steps You now have an overview of provisioning IoT devices in Azure. The next step is to try out an end-to-end IoT scenario.
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/quick-create-python.md
Title: Quickstart ΓÇô Azure Key Vault Python client library ΓÇô manage certifica
description: Learn how to create, retrieve, and delete certificates from an Azure key vault using the Python client library Previously updated : 09/03/2020 Last updated : 01/22/2022
This quickstart is using Azure Identity library with Azure CLI to authenticate u
[!INCLUDE [Create a resource group and key vault](../../../includes/key-vault-python-qs-rg-kv-creation.md)]
+### Set the KEY_VAULT_NAME environment variable
++ ### Grant access to your key vault Create an access policy for your key vault that grants certificate permission to your user account ```console
-az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --certificate-permissions delete get list create
-```
-
-#### Set environment variables
-
-This application is using key vault name as an environment variable called `KEY_VAULT_NAME`.
-
-Windows
-```cmd
-set KEY_VAULT_NAME=<your-key-vault-name>
-````
-Windows PowerShell
-```powershell
-$Env:KEY_VAULT_NAME="<your-key-vault-name>"
-```
-
-macOS or Linux
-```cmd
-export KEY_VAULT_NAME=<your-key-vault-name>
+az keyvault set-policy --name <your-unique-keyvault-name> --upn user@domain.com --certificate-permissions delete get list create
``` ## Create the sample code
If you want to also experiment with [secrets](../secrets/quick-create-python.md)
Otherwise, when you're finished with the resources created in this article, use the following command to delete the resource group and all its contained resources: ```azurecli
-az group delete --resource-group KeyVault-PythonQS-rg
+az group delete --resource-group myResourceGroup
``` ## Next steps
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/soft-delete-overview.md
Previously updated : 03/31/2021 Last updated : 01/25/2022 # Azure Key Vault soft-delete overview
You cannot reuse the name of a key vault that has been soft-deleted until the re
### Purge protection
-Purge protection is an optional Key Vault behavior and is **not enabled by default**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on via [CLI](./key-vault-recovery.md?tabs=azure-cli) or [PowerShell](./key-vault-recovery.md?tabs=azure-powershell).
+Purge protection is an optional Key Vault behavior and is **not enabled by default**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on via [CLI](./key-vault-recovery.md?tabs=azure-cli) or [PowerShell](./key-vault-recovery.md?tabs=azure-powershell). Purge protection is recommended when using keys for encryption to prevent data loss. Most Azure services that integrate with Azure Key Vault, such as Storage, require purge protection to prevent data loss.
When purge protection is on, a vault or an object in the deleted state cannot be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed.
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-python.md
Title: Quickstart ΓÇô Azure Key Vault Python client library ΓÇô manage keys
description: Learn how to create, retrieve, and delete keys from an Azure key vault using the Python client library Previously updated : 09/03/2020 Last updated : 01/22/2022
This quickstart is using Azure Identity library with Azure CLI to authenticate u
[!INCLUDE [Create a resource group and key vault](../../../includes/key-vault-python-qs-rg-kv-creation.md)]
+### Set the KEY_VAULT_NAME environment variable
++ ### Grant access to your key vault Create an access policy for your key vault that grants secret permission to your user account. ```console
-az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --secret-permissions delete get list set
-```
-
-#### Set environment variables
-
-This application is using key vault name as an environment variable called `KEY_VAULT_NAME`.
-
-Windows
-```cmd
-set KEY_VAULT_NAME=<your-key-vault-name>
-````
-Windows PowerShell
-```powershell
-$Env:KEY_VAULT_NAME="<your-key-vault-name>"
-```
-
-macOS or Linux
-```cmd
-export KEY_VAULT_NAME=<your-key-vault-name>
+az keyvault set-policy --name <your-unique-keyvault-name> --upn user@domain.com --secret-permissions delete get list set
``` ## Create the sample code
If you want to also experiment with [certificates](../certificates/quick-create-
Otherwise, when you're finished with the resources created in this article, use the following command to delete the resource group and all its contained resources: ```azurecli
-az group delete --resource-group KeyVault-PythonQS-rg
+az group delete --resource-group myResourceGroup
``` ## Next steps
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-template.md
# Quickstart: Create an Azure key vault and a key by using ARM template
-[Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets, such as keys, passwords, certificates, and other secrets. This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create a key vault and a key.
+[Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets, such as keys, passwords, and certificates. This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create a key vault and a key.
## Prerequisites
To complete this article:
(Get-AzADUser -UserPrincipalName $upn).Id Write-Host "Press [ENTER] to continue..." ```-
- 1. Write down the object ID. You need it in the next section of this quickstart.
+ Write down the object ID. You need it in the next section of this quickstart.
+ ## Review the template
More Azure Key Vault template samples can be found in [Azure Quickstart Template
|Parameter |Definition | |||
-|**keyOps** | Specifies operations that can be performed by using the key. If you do not specify this parameter, all operations can be performed. The acceptable values for this parameter are a comma-separated list of key operations as defined by the [JSON Web Key (JWK) specification](https://tools.ietf.org/html/draft-ietf-jose-json-web-key-41): <br> `["sign", "verify", "encrypt", "decrypt", " wrapKey", "unwrapKey"]` |
-|**CurveName** | Elliptic curve name for EC key type. See [JsonWebKeyCurveName](/rest/api/keyvault/createkey/createkey#jsonwebkeycurvename) |
+|**keyOps** | Specifies operations that can be performed by using the key. If you don't specify this parameter, all operations can be performed. The acceptable values for this parameter are a comma-separated list of key operations as defined by the [JSON Web Key (JWK) specification](https://tools.ietf.org/html/draft-ietf-jose-json-web-key-41): <br> `["sign", "verify", "encrypt", "decrypt", " wrapKey", "unwrapKey"]` |
+|**CurveName** | Elliptic curve (EC) name for EC key type. See [JsonWebKeyCurveName](/rest/api/keyvault/createkey/createkey#jsonwebkeycurvename) |
|**Kty** | The type of key to create. For valid values, see [JsonWebKeyType](/rest/api/keyvault/createkey/createkey#jsonwebkeytype) |
-|**Tags** | Application specific metadata in the form of key-value pairs. |
-|**nbf** | Specifies the time, as a DateTime object, before which the key cannot be used. The format would be Unix time stamp (the number of seconds after Unix Epoch on January 1st, 1970 at UTC). |
+|**Tags** | Application-specific metadata in the form of key-value pairs. |
+|**nbf** | Specifies the time, as a DateTime object, before which the key can't be used. The format would be Unix time stamp (the number of seconds after Unix Epoch on January 1st, 1970 at UTC). |
|**exp** | Specifies the expiration time, as a DateTime object. The format would be Unix time stamp (the number of seconds after Unix Epoch on January 1st, 1970 at UTC). | ## Deploy the template
You can use [Azure portal](../../azure-resource-manager/templates/deploy-portal.
## Review deployed resources
-You can either use the Azure portal to check the key vault and the key, or use the following Azure CLI or Azure PowerShell script to list the key created.
+You can use the Azure portal to check the key vault and the key. Alternatively, use the following Azure CLI or Azure PowerShell script to list the key created.
# [CLI](#tab/CLI)
$keyVaultName = Read-Host -Prompt "Enter your key vault name"
Get-AzKeyVaultKey -vaultName $keyVaultName Write-Host "Press [ENTER] to continue..." ```- ## Creating key using ARM template is different from creating key via data plane ### Creating a key via ARM-- It is only possible to create *new* keys. It is not possible to update existing keys, and it is not possible to create new versions of existing keys. If the key already exists, then the existing key is retrieved from storage and used (no write operations will occur).-- To be authorized to use this API, the caller needs to have the **"Microsoft.KeyVault/vaults/keys/write"** RBAC Action. The built-in "Key Vault Contributor" role is sufficient, since it authorizes all RBAC Actions which match the pattern "Microsoft.KeyVault/*". See the below screenshots for more information.
+- It's only possible to create *new* keys. It isn't possible to update existing keys, nor create new versions of existing keys. If the key already exists, then the existing key is retrieved from storage and used (no write operations will occur).
+- To be authorized to use this API, the caller needs to have the **"Microsoft.KeyVault/vaults/keys/write"** role-based access control (RBAC) Action. The built-in "Key Vault Contributor" role is sufficient, since it authorizes all RBAC Actions that match the pattern "Microsoft.KeyVault/*".
+ :::image type="content" source="../media/keys-quick-template-1.png" alt-text="Create a key via ARM 1":::
+ :::image type="content" source="../media/keys-quick-template-2.png" alt-text="Create a key via ARM 2":::
### Existing API (creating key via data plane)-- It is possible to create new keys, update existing keys, and create new versions of existing keys.-- To be authorized to use this API, the caller either needs to have the "create" key permission (if the vault uses access policies) or the "Microsoft.KeyVault/vaults/keys/create/action" RBAC DataAction (if the vault is enabled for RBAC).
+- It's possible to create new keys, update existing keys, and create new versions of existing keys.
+- The caller must be authorized to use this API. If the vault uses access policies, the caller must have "create" key permission; if the vault is enabled for RBAC, the caller must have "Microsoft.KeyVault/vaults/keys/create/action" RBAC DataAction.
## Clean up resources
Write-Host "Press [ENTER] to continue..."
## Next steps
-In this quickstart, you created a key vault and a key using an ARM template, and validated the deployment. To learn more about Key Vault and Azure Resource Manager, continue on to the articles below.
+In this quickstart, you created a key vault and a key using an ARM template, and validated the deployment. To learn more about Key Vault and Azure Resource Manager, see these articles.
- Read an [Overview of Azure Key Vault](../general/overview.md) - Learn more about [Azure Resource Manager](../../azure-resource-manager/management/overview.md)
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/quick-create-python.md
Title: Quickstart ΓÇô Azure Key Vault Python client library ΓÇô manage secrets
description: Learn how to create, retrieve, and delete secrets from an Azure key vault using the Python client library Previously updated : 09/03/2020 Last updated : 01/22/2022
# Quickstart: Azure Key Vault secret client library for Python
-Get started with the Azure Key Vault secret client library for Python. Follow the steps below to install the package and try out example code for basic tasks. By using Key Vault to store secrets, you avoid storing secrets in your code, which increases the security of your app.
+Get started with the Azure Key Vault secret client library for Python. Follow these steps to install the package and try out example code for basic tasks. By using Key Vault to store secrets, you avoid storing secrets in your code, which increases the security of your app.
[API reference documentation](/python/api/overview/azure/keyvault-secrets-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-secrets) | [Package (Python Package Index)](https://pypi.org/project/azure-keyvault-secrets/) ## Prerequisites - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Python 2.7+ or 3.6+](/azure/developer/python/configure-local-development-environment)-- [Azure CLI](/cli/azure/install-azure-cli)
+- [Python 2.7+ or 3.6+](/azure/developer/python/configure-local-development-environment).
+- [Azure CLI](/cli/azure/install-azure-cli).
-This quickstart assumes you are running [Azure CLI](/cli/azure/install-azure-cli) in a Linux terminal window.
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) in a Linux terminal window.
## Set up your local environment
This quickstart is using Azure Identity library with Azure CLI to authenticate u
[!INCLUDE [Create a resource group and key vault](../../../includes/key-vault-python-qs-rg-kv-creation.md)]
+### Set the KEY_VAULT_NAME environment variable
++ ### Grant access to your key vault Create an access policy for your key vault that grants secret permission to your user account. ```console
-az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --secret-permissions delete get list set
+az keyvault set-policy --name <your-unique-keyvault-name> --upn user@domain.com --secret-permissions delete get list set
``` ## Create the sample code
python kv_secrets.py
In this quickstart, the logged-in user is used to authenticate to key vault, which is the preferred method for local development. For applications deployed to Azure, a managed identity should be assigned to App Service or Virtual Machine. For more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
-In the example below, the name of your key vault is expanded using the value of the "KVUri" variable, in the format: "https://\<your-key-vault-name\>.vault.azure.net". This example is using ['DefaultAzureCredential()'](/python/api/azure-identity/azure.identity.defaultazurecredential) class, which allows to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/python/api/overview/azure/identity-readme).
+In this example, the name of your key vault is expanded using the value of the "KVUri" variable, in the format: "https://\<your-key-vault-name\>.vault.azure.net". This example uses the ['DefaultAzureCredential()'](/python/api/azure-identity/azure.identity.defaultazurecredential) class, which allows you to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/python/api/overview/azure/identity-readme).
```python credential = DefaultAzureCredential()
If you want to also experiment with [certificates](../certificates/quick-create-
Otherwise, when you're finished with the resources created in this article, use the following command to delete the resource group and all its contained resources: ```azurecli
-az group delete --resource-group KeyVault-PythonQS-rg
+az group delete --resource-group myResourceGroup
``` ## Next steps
load-balancer Load Balancer Multivip Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-multivip-overview.md
na Previously updated : 08/07/2019 Last updated : 01/26/2022
Azure Load Balancer allows you to load balance services on multiple ports, multi
This article describes the fundamentals of this ability, important concepts, and constraints. If you only intend to expose services on one IP address, you can find simplified instructions for [public](./quickstart-load-balancer-standard-public-portal.md) or [internal](./quickstart-load-balancer-standard-internal-portal.md) load balancer configurations. Adding multiple frontends is incremental to a single frontend configuration. Using the concepts in this article, you can expand a simplified configuration at any time.
-When you define an Azure Load Balancer, a frontend and a backend pool configuration are connected with rules. The health probe referenced by the rule is used to determine how new flows are sent to a node in the backend pool. The frontend (aka VIP) is defined by a 3-tuple comprised of an IP address (public or internal), a transport protocol (UDP or TCP), and a port number from the load balancing rule. The backend pool is a collection of Virtual Machine IP configurations (part of the NIC resource) which reference the Load Balancer backend pool.
+When you define an Azure Load Balancer, a frontend and a backend pool configuration are connected with rules. The health probe referenced by the rule is used to determine how new flows are sent to a node in the backend pool. The frontend (also known as VIP) is defined by a 3-tuple comprised of an IP address (public or internal), a transport protocol (UDP or TCP), and a port number from the load balancing rule. The backend pool is a collection of Virtual Machine IP configurations (part of the NIC resource) which reference the Load Balancer backend pool.
The following table contains some example frontend configurations:
The Floating IP rule type is the foundation of several load balancer configurati
## Limitations
-* Multiple frontend configurations are only supported with IaaS VMs.
+* Multiple frontend configurations are only supported with IaaS VMs and virtual machine scale sets.
* With the Floating IP rule, your application must use the primary IP configuration for outbound SNAT flows. If your application binds to the frontend IP address configured on the loopback interface in the guest OS, Azure's outbound SNAT is not available to rewrite the outbound flow and the flow fails. Review [outbound scenarios](load-balancer-outbound-connections.md). * Floating IP is not currently supported on secondary IP configurations for Internal Load Balancing scenarios. * Public IP addresses have an effect on billing. For more information, see [IP Address pricing](https://azure.microsoft.com/pricing/details/ip-addresses/)
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-standard-diagnostics.md
Title: Diagnostics with metrics, alerts, and resource health - Azure Standard Load Balancer
-description: Use the available metrics, alerts, and resource health information to diagnose your Azure Standard Load Balancer.
-
+ Title: Diagnostics with metrics, alerts, and resource health
+
+description: Use the available metrics, alerts, and resource health information to diagnose your load balancer.
- Previously updated : 12/14/2021 Last updated : 01/26/2022
-# Standard Load Balancer diagnostics with metrics, alerts and resource health
+# Standard load balancer diagnostics with metrics, alerts, and resource health
-Azure Standard Load Balancer exposes the following diagnostic capabilities:
+Azure Load Balancer exposes the following diagnostic capabilities:
* **Multi-dimensional metrics and alerts**: Provides multi-dimensional diagnostic capabilities through [Azure Monitor](../azure-monitor/overview.md) for standard load balancer configurations. You can monitor, manage, and troubleshoot your standard load balancer resources.
-* **Resource health**: The Resource Health status of your Load Balancer is available in the Resource Health page under Monitor. This automatic check informs you of the current availability of your Load Balancer resource.
-This article provides a quick tour of these capabilities, and it offers ways to use them for Standard Load Balancer.
+* **Resource health**: The Resource Health status of your load balancer is available in the **Resource health** page under **Monitor**. This automatic check informs you of the current availability of your load balancer resource.
+
+This article provides a quick tour of these capabilities, and it offers ways to use them for a standard load balancer.
## <a name = "MultiDimensionalMetrics"></a>Multi-dimensional metrics Azure Load Balancer provides multi-dimensional metrics via the Azure Metrics in the Azure portal, and it helps you get real-time diagnostic insights into your load balancer resources.
-The various Standard Load Balancer configurations provide the following metrics:
+The various load balancer configurations provide the following metrics:
| Metric | Resource type | Description | Recommended aggregation | | | | | |
-| Data path availability | Public and internal load balancer | Standard Load Balancer continuously exercises the data path from within a region to the load balancer front end, all the way to the SDN stack that supports your VM. As long as healthy instances remain, the measurement follows the same path as your application's load-balanced traffic. The data path that your customers use is also validated. The measurement is invisible to your application and does not interfere with other operations.| Average |
-| Health probe status | Public and internal load balancer | Standard Load Balancer uses a distributed health-probing service that monitors your application endpoint's health according to your configuration settings. This metric provides an aggregate or per-endpoint filtered view of each instance endpoint in the load balancer pool. You can see how Load Balancer views the health of your application, as indicated by your health probe configuration. | Average |
-| SYN (synchronize) count | Public and internal load balancer | Standard Load Balancer does not terminate Transmission Control Protocol (TCP) connections or interact with TCP or UDP packet flows. Flows and their handshakes are always between the source and the VM instance. To better troubleshoot your TCP protocol scenarios, you can make use of SYN packets counters to understand how many TCP connection attempts are made. The metric reports the number of TCP SYN packets that were received.| Sum |
-| SNAT connection count | Public load balancer |Standard Load Balancer reports the number of outbound flows that are masqueraded to the Public IP address front end. Source network address translation (SNAT) ports are an exhaustible resource. This metric can give an indication of how heavily your application is relying on SNAT for outbound originated flows. Counters for successful and failed outbound SNAT flows are reported and can be used to troubleshoot and understand the health of your outbound flows.| Sum |
-| Allocated SNAT ports | Public load balancer | Standard Load Balancer reports the number of SNAT ports allocated per backend instance | Average. |
-| Used SNAT ports | Public load balancer | Standard Load Balancer reports the number of SNAT ports that are utilized per backend instance. | Average |
-| Byte count | Public and internal load balancer | Standard Load Balancer reports the data processed per front end. You may notice that the bytes are not distributed equally across the backend instances. This is expected as Azure's Load Balancer algorithm is based on flows | Sum |
-| Packet count | Public and internal load balancer | Standard Load Balancer reports the packets processed per front end.| Sum |
+| Data path availability | Public and internal load balancer | A standard load balancer continuously uses the data path from within a region to the load balancer frontend, to the network that supports your VM. As long as healthy instances remain, the measurement follows the same path as your application's load-balanced traffic. The data path in use is validated. The measurement is invisible to your application and doesn't interfere with other operations. | Average |
+| Health probe status | Public and internal load balancer | A standard load balancer uses a distributed health-probing service that monitors your application endpoint's health according to your configuration settings. This metric provides an aggregate or per-endpoint filtered view of each instance endpoint in the load balancer pool. You can see how load balancer views the health of your application, as indicated by your health probe configuration. | Average |
+| SYN (synchronize) count | Public and internal load balancer | A standard load balancer doesn't terminate Transmission Control Protocol (TCP) connections or interact with TCP or User Datagram Protocol (UDP) flows. Flows and their handshakes are always between the source and the VM instance. To better troubleshoot your TCP protocol scenarios, you can make use of SYN packet counters to understand how many TCP connection attempts are made. The metric reports the number of TCP SYN packets that were received. | Sum |
+| Source Network Address Translation (SNAT) connection count | Public load balancer | A standard load balancer reports the number of outbound flows that are masqueraded to the Public IP address frontend. SNAT ports are an exhaustible resource. This metric can give an indication of how heavily your application is relying on SNAT for outbound originated flows. Counters for successful and failed outbound SNAT flows are reported. The counters can be used to troubleshoot and understand the health of your outbound flows.| Sum |
+| Allocated SNAT ports | Public load balancer | A standard load balancer reports the number of SNAT ports allocated per backend instance. | Average |
+| Used SNAT ports | Public load balancer | A standard load balancer reports the number of SNAT ports that are utilized per backend instance. | Average |
+| Byte count | Public and internal load balancer | A standard load balancer reports the data processed per front end. You may notice that the bytes aren't distributed equally across the backend instances. This is expected as the Azure Load Balancer algorithm is based on flows. | Sum |
+| Packet count | Public and internal load balancer | A standard load balancer reports the packets processed per front end.| Sum |
>[!NOTE]
- >When using distributing traffic from an internal load balancer through an NVA or firewall Syn Packet, Byte Count, and Packet Count metrics are not be available and will show as zero.
-
- >[!NOTE]
- >Max and min aggregations are not available for the SYN count, packet count, SNAT connection count, and byte count metrics
-
+ >When distributing traffic from an internal load balancer through an NVA or firewall, the SYN packet, byte count, and packet count metrics aren't available and will show as zero.
+ >
+ >Max and min aggregations are not available for the SYN count, packet count, SNAT connection count, and byte count metrics.
+
### View your load balancer metrics in the Azure portal
-The Azure portal exposes the load balancer metrics via the Metrics page, which is available on both the load balancer resource page for a particular resource and the Azure Monitor page.
+The Azure portal exposes the load balancer metrics via the Metrics page. This page is available on both the load balancer's resource page for a particular resource and the Azure Monitor page.
+
+To view the metrics for your standard load balancer resources:
-To view the metrics for your Standard Load Balancer resources:
-1. Go to the Metrics page and do either of the following:
- * On the load balancer resource page, select the metric type in the drop-down list.
+1. Go to the metrics page and do either of the following tasks:
+
+ * On the load balancer's resource page, select the metric type in the drop-down list.
+
* On the Azure Monitor page, select the load balancer resource.+ 2. Set the appropriate metric aggregation type.+ 3. Optionally, configure the required filtering and grouping.+ 4. Optionally, configure the time range and aggregation. By default time is displayed in UTC. >[!NOTE]
- >Time aggregation is important when interpreting certain metrics as data is sampled once per minute. If time aggregation is set to five minutes and metric aggregation type Sum is used for metrics such as SNAT Allocation, your graph will display five times the total allocated SNAT ports.
+ >Time aggregation is important when interpreting certain metrics as data is sampled once per minute. If time aggregation is set to five minutes and metric aggregation type Sum is used for metrics such as SNAT allocation, your graph will display five times the total allocated SNAT ports.
> >Recommendation: When analyzing metric aggregation type Sum and Count, we recommend using a time aggregation value that is greater than one minute.
-![Metrics for Standard Load Balancer](./media/load-balancer-standard-diagnostics/lbmetrics1anew.png)
-*Figure: Data Path Availability metric for Standard Load Balancer*
+*Figure: Metric for data path availability for a standard load balancer*
### Retrieve multi-dimensional metrics programmatically via APIs
-For API guidance for retrieving multi-dimensional metric definitions and values, see [Azure Monitoring REST API walkthrough](../azure-monitor/essentials/rest-api-walkthrough.md#retrieve-metric-definitions). These metrics can be written to a storage account by adding a [Diagnostic Setting](../azure-monitor/essentials/diagnostic-settings.md) for the 'All Metrics' category.
+For API guidance for retrieving multi-dimensional metric definitions and values, see [Azure Monitoring REST API walkthrough](../azure-monitor/essentials/rest-api-walkthrough.md#retrieve-metric-definitions). These metrics can be written to a storage account by adding a [diagnostic setting](../azure-monitor/essentials/diagnostic-settings.md) for the 'All Metrics' category.
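As a rough companion to that walkthrough, the following Python sketch lists the metric definitions that are exposed for a load balancer resource. The resource ID is a placeholder, and the `api-version` value and use of the `azure-identity` package are assumptions for illustration; adjust them to match the REST API reference and your environment.

```python
# Minimal sketch: list the metric definitions (names and dimensions) available
# for a load balancer. The resource ID and api-version below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/loadBalancers/<load-balancer-name>"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = f"https://management.azure.com{resource_id}/providers/Microsoft.Insights/metricDefinitions"
response = requests.get(
    url,
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2018-01-01"},
)
response.raise_for_status()

for definition in response.json().get("value", []):
    dimensions = [d["value"] for d in definition.get("dimensions", [])]
    print(definition["name"]["value"], dimensions)
```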
### <a name = "DiagnosticScenarios"></a>Common diagnostic scenarios and recommended views
-#### Is the data path up and available for my Load Balancer Frontend?
+#### Is the data path up and available for my load balancer frontend?
+ <details><summary>Expand</summary>
-The data path availability metric describes the health of the data path within the region to the compute host where your VMs are located. The metric is a reflection of the health of the Azure infrastructure. You can use the metric to:
-- Monitor the external availability of your service-- Dig deeper and understand whether the platform on which your service is deployed is healthy or whether your guest OS or application instance is healthy.-- Isolate whether an event is related to your service or the underlying data plane. Do not confuse this metric with the health probe status ("Backend Instance availability").
+The metric for data path availability describes the health within the region of the data path to the compute host where your VMs are located. The metric is a reflection of the health of the Azure infrastructure. You can use the metric to:
+
+- Monitor the external availability of your service.
+
+- Investigate whether the platform where your service is deployed is healthy, and whether your guest OS or application instance is healthy.
+
+- Isolate whether an event is related to your service or the underlying data plane. Don't confuse this metric with the health probe status ("Backend instance availability").
+
+To get the data path availability for your standard load balancer resources:
-To get the Data Path Availability for your Standard Load Balancer resources:
1. Make sure the correct load balancer resource is selected. + 2. In the **Metric** drop-down list, select **Data Path Availability**. + 3. In the **Aggregation** drop-down list, select **Avg**.
-4. Additionally, add a filter on the Frontend IP address or Frontend port as the dimension with the required front-end IP address or front-end port, and then group them by the selected dimension.
-![VIP probing](./media/load-balancer-standard-diagnostics/LBMetrics-VIPProbing.png)
+4. Additionally, add a filter on the frontend IP address or frontend port dimension with the required value, and then group by the selected dimension. A programmatic equivalent is sketched after the figure below.
+
-*Figure: Load Balancer Frontend probing details*
+*Figure: Load balancer frontend probing details*
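If you prefer to pull the same data programmatically, the sketch below queries the data path availability metric with the **Avg** aggregation and splits it by frontend IP address, mirroring the portal steps above. The metric name (`VipAvailability`), the dimension name (`FrontendIPAddress`), and the `api-version` are assumptions based on commonly documented values, so confirm them against the metric definitions call shown earlier.

```python
# Minimal sketch: query data path availability (assumed metric name "VipAvailability"),
# averaged per minute and split by frontend IP address.
import requests
from azure.identity import DefaultAzureCredential

resource_id = "<load-balancer-resource-id>"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = f"https://management.azure.com{resource_id}/providers/Microsoft.Insights/metrics"
params = {
    "api-version": "2018-01-01",        # assumption; verify against the REST reference
    "metricnames": "VipAvailability",   # assumed name for the data path availability metric
    "aggregation": "Average",
    "interval": "PT1M",
    # Filtering a dimension to '*' asks Azure Monitor to split the result by that dimension.
    "$filter": "FrontendIPAddress eq '*'",
}
response = requests.get(url, headers={"Authorization": f"Bearer {token}"}, params=params)
response.raise_for_status()

for series in response.json()["value"][0]["timeseries"]:
    frontend = [m.get("value") for m in series.get("metadatavalues", [])]
    latest = series["data"][-1] if series.get("data") else {}
    print(frontend, latest.get("average"))
```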
The metric is generated by an active, in-band measurement. A probing service within the region originates traffic for the measurement. The service is activated as soon as you create a deployment with a public front end, and it continues until you remove the front end.
-A packet matching your deployment's front end and rule is generated periodically. It traverses the region from the source to the host where a VM in the back-end pool is located. The load balancer infrastructure performs the same load balancing and translation operations as it does for all other traffic. This probe is in-band on your load-balanced endpoint. After the probe arrives on the compute host, where a healthy VM in the back-end pool is located, the compute host generates a response to the probing service. Your VM does not see this traffic.
+A packet matching your deployment's front end and rule is generated periodically. It traverses the region from the source to the host where a VM in the back-end pool is located. The load balancer infrastructure performs the same load balancing and translation operations as it does for all other traffic. This probe is in-band on your load-balanced endpoint. After the probe arrives on the compute host, where a healthy VM in the back-end pool is located, the compute host generates a response to the probing service. Your VM doesn't see this traffic.
+
+Data path availability fails for the following reasons:
-Datapath availability fails for the following reasons:
- Your deployment has no healthy VMs remaining in the back-end pool. + - An infrastructure outage has occurred.
-For diagnostic purposes, you can use the [Data Path Availability metric together with the health probe status](#vipavailabilityandhealthprobes).
+For diagnostic purposes, you can use the [data path availability metric together with the health probe status](#vipavailabilityandhealthprobes).
Use **Average** as the aggregation for most scenarios.+ </details>
-#### Are the Backend Instances for my Load Balancer responding to probes?
+#### Are the backend instances for my load balancer responding to probes?
+ <details>+ <summary>Expand</summary>+ The health probe status metric describes the health of your application deployment as configured by you when you configure the health probe of your load balancer. The load balancer uses the status of the health probe to determine where to send new flows. Health probes originate from an Azure infrastructure address and are visible within the guest OS of the VM.
-To get the health probe status for your Standard Load Balancer resources:
+To get the health probe status for your standard load balancer resources:
+ 1. Select the **Health Probe Status** metric with **Avg** aggregation type.
-2. Apply a filter on the required Frontend IP address or port (or both).
+
+2. Apply a filter on the required frontend IP address or port (or both).
Health probes fail for the following reasons:-- You configure a health probe to a port that is not listening or not responding or is using the wrong protocol. If your service is using direct server return (DSR, or floating IP) rules, make sure that the service is listening on the IP address of the NIC's IP configuration and not just on the loopback that's configured with the front-end IP address.-- Your probe is not permitted by the Network Security Group, the VM's guest OS firewall, or the application layer filters.+
+- You configured a health probe to a port that isn't listening, isn't responding, or is using the wrong protocol. If your service uses direct server return (floating IP) rules, make sure that the service listens on the IP address of the NIC's IP configuration and not just on the loopback that's configured with the front-end IP address. A quick port connectivity check is sketched after this list.
+
+- Your probe isn't permitted by the Network Security Group, the VM's guest OS firewall, or the application layer filters.
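The first failure reason is something you can sanity-check outside the portal. The following sketch, intended to run from a VM in the same virtual network as the backend instance, tests whether the probe port is accepting TCP connections; the IP address and port are hypothetical placeholders, and HTTP probes additionally require a 200 response from the configured probe path.

```python
# Minimal sketch: check whether a backend instance accepts TCP connections on the
# probe port. The IP address and port are placeholders for your own deployment.
import socket

backend_ip = "10.0.0.4"  # hypothetical backend NIC IP address
probe_port = 80          # hypothetical health probe port

try:
    with socket.create_connection((backend_ip, probe_port), timeout=3):
        print(f"{backend_ip}:{probe_port} is accepting TCP connections.")
except OSError as error:
    print(
        f"{backend_ip}:{probe_port} is unreachable ({error}); "
        "check the service binding, NSG rules, and the guest OS firewall."
    )
```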
Use **Average** as the aggregation for most scenarios.+ </details> #### How do I check my outbound connection statistics? + <details>+ <summary>Expand</summary>+ The SNAT connections metric describes the volume of successful and failed connections for [outbound flows](./load-balancer-outbound-connections.md). A failed connections volume of greater than zero indicates SNAT port exhaustion. You must investigate further to determine what may be causing these failures. SNAT port exhaustion manifests as a failure to establish an [outbound flow](./load-balancer-outbound-connections.md). Review the article about outbound connections to understand the scenarios and mechanisms at work, and to learn how to mitigate and design to avoid SNAT port exhaustion. To get SNAT connection statistics:+ 1. Select **SNAT Connections** metric type and **Sum** as aggregation. + 2. Group by **Connection State** for successful and failed SNAT connection counts to be represented by different lines.
-![SNAT connection](./media/load-balancer-standard-diagnostics/LBMetrics-SNATConnection.png)
-*Figure: Load Balancer SNAT connection count*
-</details>
+*Figure: Load balancer SNAT connection count*
+</details>
#### How do I check my SNAT port usage and allocation?+ <details>+ <summary>Expand</summary>
-The Used SNAT Ports metric tracks how many SNAT ports are being consumed to maintain outbound flows. This indicates how many unique flows are established between an internet source and a backend VM or virtual machine scale set that is behind a load balancer and does not have a public IP address. By comparing the number of SNAT ports you are using with the Allocated SNAT Ports metric, you can determine if your service is experiencing or at risk of SNAT exhaustion and resulting outbound flow failure.
+
+The used SNAT ports metric tracks how many SNAT ports are being consumed to maintain outbound flows. This indicates how many unique flows are established between an internet source and a backend VM or virtual machine scale set that is behind a load balancer and doesn't have a public IP address. By comparing the number of SNAT ports you're using with the Allocated SNAT Ports metric, you can determine if your service is experiencing or at risk of SNAT exhaustion and resulting outbound flow failure.
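As a back-of-the-envelope check, once you have the average used and allocated port values for a backend instance, the ratio between them tells you how close that instance is to exhaustion. The 75% and 90% thresholds in this sketch are only illustrative; choose values that match your own alerting strategy (see the alerting section later in this article).

```python
# Minimal sketch: classify SNAT port utilization for a backend instance.
# used_ports and allocated_ports would come from the "Used SNAT Ports" and
# "Allocated SNAT Ports" metrics (Average aggregation) for that instance.
def snat_utilization(used_ports: float, allocated_ports: float) -> str:
    if allocated_ports <= 0:
        return "no SNAT ports allocated - outbound flows will fail"
    ratio = used_ports / allocated_ports
    if ratio >= 0.90:
        return f"{ratio:.0%} used - high risk of SNAT exhaustion"
    if ratio >= 0.75:
        return f"{ratio:.0%} used - approaching SNAT exhaustion"
    return f"{ratio:.0%} used - healthy"

print(snat_utilization(used_ports=960, allocated_ports=1024))  # illustrative values
```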
If your metrics indicate risk of [outbound flow](./load-balancer-outbound-connections.md) failure, reference the article and take steps to mitigate this to ensure service health. To view SNAT port usage and allocation:+ 1. Set the time aggregation of the graph to 1 minute to ensure desired data is displayed.
-1. Select **Used SNAT Ports** and/or **Allocated SNAT Ports** as the metric type and **Average** as the aggregation
- * By default these metrics are the average number of SNAT ports allocated to or used by each backend VM or virtual machine scale set, corresponding to all frontend public IPs mapped to the Load Balancer, aggregated over TCP and UDP.
- * To view total SNAT ports used by or allocated for the load balancer use metric aggregation **Sum**
-1. Filter to a specific **Protocol Type**, a set of **Backend IPs**, and/or **Frontend IPs**.
-1. To monitor health per backend or frontend instance, apply splitting.
+
+2. Select **Used SNAT Ports** and/or **Allocated SNAT Ports** as the metric type and **Average** as the aggregation.
+
+ * By default these metrics are the average number of SNAT ports allocated to or used by each backend VM or virtual machine scale set, corresponding to all frontend public IPs mapped to the load balancer, aggregated over TCP and UDP.
+
+ * To view the total SNAT ports used by or allocated for the load balancer, use the **Sum** metric aggregation.
+
+3. Filter to a specific **Protocol Type**, a set of **Backend IPs**, and/or **Frontend IPs**.
+
+4. To monitor health per backend or frontend instance, apply splitting.
+
 * Note: splitting allows only a single metric to be displayed at a time.
-1. For example, to monitor SNAT usage for TCP flows per machine, aggregate by **Average**, split by **Backend IPs** and filter by **Protocol Type**.
-![SNAT allocation and usage](./media/load-balancer-standard-diagnostics/snat-usage-and-allocation.png)
+5. For example, to monitor SNAT usage for TCP flows per machine, aggregate by **Average**, split by **Backend IPs** and filter by **Protocol Type**.
+ *Figure: Average TCP SNAT port allocation and usage for a set of backend VMs*
-![SNAT usage by backend instance](./media/load-balancer-standard-diagnostics/snat-usage-split.png)
*Figure: TCP SNAT port usage per backend instance*+ </details> #### How do I check inbound/outbound connection attempts for my service?+ <details>+ <summary>Expand</summary>
-A SYN packets metric describes the volume of TCP SYN packets, which have arrived or were sent (for [outbound flows](../load-balancer-outbound-connections.md)) that are associated with a specific front end. You can use this metric to understand TCP connection attempts to your service.
+The SYN packets metric describes the volume of TCP SYN packets that have arrived or were sent (for outbound flows) and that are associated with a specific front end. You can use this metric to understand TCP connection attempts to your service.
+
+For more information on outbound connections, see [Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md).
Use **Sum** as the aggregation for most scenarios.
-![SYN connection](./media/load-balancer-standard-diagnostics/LBMetrics-SYNCount.png)
-*Figure: Load Balancer SYN count*
-</details>
+*Figure: Load balancer SYN count*
+</details>
#### How do I check my network bandwidth consumption? + <details>+ <summary>Expand</summary>+ The bytes and packet counters metric describes the volume of bytes and packets that are sent or received by your service on a per-front-end basis. Use **Sum** as the aggregation for most scenarios. To get byte or packet count statistics:+ 1. Select the **Bytes Count** and/or **Packet Count** metric type, with **Sum** as the aggregation. + 2. Do either of the following:
- * Apply a filter on a specific front-end IP, front-end port, back-end IP, or back-end port.
- * Get overall statistics for your load balancer resource without any filtering.
+
+ * Apply a filter on a specific front-end IP, front-end port, back-end IP, or back-end port.
+
+ * Get overall statistics for your load balancer resource without any filtering.
+
-![Byte count](./media/load-balancer-standard-diagnostics/LBMetrics-ByteCount.png)
+*Figure: Load balancer byte count*
-*Figure: Load Balancer byte count*
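To turn the per-minute **Bytes Count** sums into a bandwidth figure, divide the total bytes by the window length in seconds and multiply by 8 bits. A minimal sketch, assuming you've already retrieved the one-minute sums (for example, with the metrics query shown earlier):

```python
# Minimal sketch: convert per-minute ByteCount sums into an average bandwidth in Mbps.
# byte_count_per_minute holds the "total" value of each one-minute data point.
byte_count_per_minute = [36_000_000, 42_500_000, 39_750_000]  # illustrative values in bytes

window_seconds = 60 * len(byte_count_per_minute)
average_mbps = sum(byte_count_per_minute) * 8 / window_seconds / 1_000_000

print(f"Average throughput over {window_seconds} s: {average_mbps:.2f} Mbps")
```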
</details> #### <a name = "vipavailabilityandhealthprobes"></a>How do I diagnose my load balancer deployment?+ <details>+ <summary>Expand</summary>
-By using a combination of the Data Path Availability and Health Probe Status metrics on a single chart you can identify where to look for the problem and resolve the problem. You can gain assurance that Azure is working correctly and use this knowledge to conclusively determine that the configuration or application is the root cause.
-You can use health probe metrics to understand how Azure views the health of your deployment as per the configuration you have provided. Looking at health probes is always a great first step in monitoring or determining a cause.
+By using a combination of the data path availability and health probe status metrics on a single chart, you can identify where to look for the problem and resolve the problem. You can gain assurance that Azure is working correctly and use this knowledge to conclusively determine that the configuration or application is the root cause.
-You can take it a step further and use Data Path availability metric to gain insight into how Azure views the health of the underlying data plane that's responsible for your specific deployment. When you combine both metrics, you can isolate where the fault might be, as illustrated in this example:
+You can use health probe metrics to understand how Azure views the health of your deployment according to the configuration you've provided. Looking at health probes is always a great first step in monitoring or determining a cause.
-![Combining Data Path Availability and Health Probe Status metrics](./media/load-balancer-standard-diagnostics/lbmetrics-dipnvipavailability-2bnew.png)
+You can take it a step further and use the data path availability metric to gain insight into how Azure views the health of the underlying data plane that's responsible for your specific deployment. When you combine both metrics, you can isolate where the fault might be, as illustrated in this example:
-*Figure: Combining Data Path Availability and Health Probe Status metrics*
+
+*Figure: Combining data path availability and health probe status metrics*
The chart displays the following information:+ - The infrastructure hosting your VMs was unavailable and at 0 percent at the beginning of the chart. Later, the infrastructure was healthy and the VMs were reachable, and more than one VM was placed in the back end. This information is indicated by the blue trace for data path availability, which was later at 100 percent. + - The health probe status, indicated by the purple trace, is at 0 percent at the beginning of the chart. The circled area in green highlights where the health probe status became healthy, and at which point the customer's deployment was able to accept new flows.
-The chart allows customers to troubleshoot the deployment on their own without having to guess or ask support whether other issues are occurring. The service was unavailable because health probes were failing due to either a misconfiguration or a failed application.
+The chart allows customers to troubleshoot the deployment on their own without having to guess or ask support whether other issues are occurring. The service was unavailable because health probes were failing due to either a misconfiguration or a failed application.
+ </details> ## Configure alerts for multi-dimensional metrics
-Azure Standard Load Balancer supports easily configurable alerts for multi-dimensional metrics. Configure custom thresholds for specific metrics to trigger alerts with varying levels of severity to empower a touchless resource monitoring experience.
+Azure Load Balancer supports easily configurable alerts for multi-dimensional metrics. Configure custom thresholds for specific metrics to trigger alerts with varying levels of severity to empower a no-touch resource monitoring experience.
To configure alerts:+ 1. Go to the alert sub-blade for the load balancer
-1. Create new alert rule
+
+2. Create new alert rule
+
1. Configure alert condition
- 1. (Optional) Add action group for automated repair
- 1. Assign alert severity, name and description that enables intuitive reaction
+
+ 2. (Optional) Add action group for automated repair
+
+ 3. Assign an alert severity, name, and description that enable an intuitive response
### Inbound availability alerting+ To alert for inbound availability, you can create two separate alerts using the data path availability and health probe status metrics. Customers may have different scenarios that require specific alerting logic, but the below examples will be helpful for most configurations.
-Using data path availability, you can fire alerts whenever a specific load balancing rule becomes unavailable. You can configure this alert by setting an alert condition for the data path availability and splitting by all current values and future values for both Frontend Port and Frontend IP Address. Setting the alert logic to be less than or equal to 0 will cause this alert to be fired whenever any load balancing rule becomes unresponsive. Set the aggregation granularity and frequency of evaluation according to your desired evaluation.
+Using data path availability, you can fire alerts whenever a specific load-balancing rule becomes unavailable. You can configure this alert by setting an alert condition for the data path availability and splitting by all current values and future values for both frontend port and frontend IP address. Setting the alert logic to be less than or equal to 0 will cause this alert to be fired whenever any load-balancing rule becomes unresponsive. Set the aggregation granularity and frequency of evaluation according to your desired evaluation.
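For reference only, the sketch below shows roughly how such an alert rule could be created through the Azure Resource Manager metric alerts API instead of the portal. The metric name (`VipAvailability`), dimension names, criteria schema, and `api-version` are assumptions drawn from the public metric alerts REST reference rather than from this article, so verify them before relying on this.

```python
# Minimal sketch: create a metric alert that fires when data path availability
# (assumed metric name "VipAvailability") drops to 0 for any frontend IP or port.
# The subscription, resource group, and resource ID values are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
load_balancer_id = "<load-balancer-resource-id>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

alert_body = {
    "location": "global",
    "properties": {
        "description": "Data path availability dropped to 0 for a load-balancing rule.",
        "severity": 1,
        "enabled": True,
        "scopes": [load_balancer_id],
        "evaluationFrequency": "PT1M",
        "windowSize": "PT5M",
        "criteria": {
            # Criteria schema is an assumption based on the metric alerts REST reference.
            "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
            "allOf": [{
                "name": "DataPathDown",
                "metricName": "VipAvailability",
                "timeAggregation": "Average",
                "operator": "LessThanOrEqual",
                "threshold": 0,
                "dimensions": [
                    {"name": "FrontendIPAddress", "operator": "Include", "values": ["*"]},
                    {"name": "FrontendPort", "operator": "Include", "values": ["*"]},
                ],
            }],
        },
        "actions": [],  # add your action group resource IDs here
    },
}

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Insights"
    "/metricAlerts/lb-datapath-availability"
)
response = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2018-03-01"},  # assumption; verify
    json=alert_body,
)
response.raise_for_status()
print("Alert rule created:", response.json()["id"])
```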
-With health probe status you can alert when a given backend instance fails to respond to the health probe for a significant amount of time. Set up your alert condition to use the health probe status metric and split by Backend IP Address and Backend Port. This will ensure that you can alert separately for each individual backend instanceΓÇÖs ability to serve traffic on a specific port. Use the **Average** aggregation type and set the threshold value according to how frequently your backend instance is probed and what you consider to be your healthy threshold.
+With health probe status, you can alert when a given backend instance fails to respond to the health probe for a significant amount of time. Set up your alert condition to use the health probe status metric and split by backend IP address and backend port. This will ensure that you can alert separately for each individual backend instance's ability to serve traffic on a specific port. Use the **Average** aggregation type and set the threshold value according to how frequently your backend instance is probed and what you consider to be your healthy threshold.
You can also alert on a backend pool level by not splitting by any dimensions and using the **Average** aggregation type. This will allow you to set up alert rules such as alerting when 50% of your backend pool members are unhealthy. ### Outbound availability alerting
-To configure for outbound availability, you can configure two separate alerts using the SNAT Connection Count and Used SNAT Port metrics.
-To detect outbound connection failures, configure an alert using SNAT Connection Count and filtering to Connection State = Failed. Use the **Total** aggregation. You can then also split this by Backend IP Address set to all current and future values to alert separately for each backend instance experiencing failed connections. Set the threshold to be greater than zero or a higher number if you expect to see some outbound connection failures.
+To configure for outbound availability, you can configure two separate alerts using the SNAT connection count and used SNAT port metrics.
+
+To detect outbound connection failures, configure an alert using SNAT connection count and filtering to **Connection State = Failed**. Use the **Total** aggregation. You can then also split this by backend IP address set to all current and future values to alert separately for each backend instance experiencing failed connections. Set the threshold to be greater than zero or a higher number if you expect to see some outbound connection failures.
+
+With used SNAT ports, you can alert on a higher risk of SNAT exhaustion and outbound connection failure. Ensure you're splitting by backend IP address and protocol when using this alert. Use the **Average** aggregation. Set the threshold to be greater than a percentage of the number of ports you've allocated per instance that you determine is unsafe. For example, configure a low severity alert when a backend instance uses 75% of its allocated ports. Configure a high severity alert when it uses 90% or 100% of its allocated ports.
-Through Used SNAT Ports you can alert on a higher risk of SNAT exhaustion and outbound connection failure. Ensure you are splitting by Backend IP address and Protocol when using this alert and use the **Average** aggregation. Set the threshold to be greater than a percentage(s) of the number of ports you have allocated per instance that you deem unsafe. For example, you may configure a low severity alert when a backend instance uses 75% of its allocated ports and a high severity when it uses 90% or 100% of its allocated ports.
## <a name = "ResourceHealth"></a>Resource health status
-Health status for the Standard Load Balancer resources is exposed via the existing **Resource health** under **Monitor > Service Health**. It is evaluated every **two minutes** by measuring Data Path Availability which determines whether your Frontend Load Balancing endpoints are available.
+Health status for the standard load balancer resources is exposed via the existing **Resource health** under **Monitor > Service health**. It's evaluated every **two minutes** by measuring data path availability, which determines whether your frontend load-balancing endpoints are available.
| Resource health status | Description | | | | | Available | Your standard load balancer resource is healthy and available. |
-| Degraded | Your standard load balancer has platform or user initiated events impacting performance. The Datapath Availability metric has reported less than 90% but greater than 25% health for at least two minutes. You will experience moderate to severe performance impact. [Follow the troubleshooting RHC guide](./troubleshoot-rhc.md) to determine whether there are user initiated events causing impacting your availability.
-| Unavailable | Your standard load balancer resource is not healthy. The Datapath Availability metric has reported less the 25% health for at least two minutes. You will experience significant performance impact or lack of availability for inbound connectivity. There may be user or platform events causing unavailability. [Follow the troubleshooting RHC guide](./troubleshoot-rhc.md) to determine whether there are user initiated events impacting your availability. |
-| Unknown | Resource health status for your standard load balancer resource has not been updated yet or has not received Data Path availability information for the last 10 minutes. This state should be transient and will reflect correct status as soon as data is received. |
+| Degraded | Your standard load balancer has platform or user-initiated events impacting performance. The metric for data path availability has reported less than 90% but greater than 25% health for at least two minutes. You'll experience moderate to severe performance degradation. [Follow the troubleshooting RHC guide](./troubleshoot-rhc.md) to determine whether there are user-initiated events impacting your availability. |
+| Unavailable | Your standard load balancer resource isn't healthy. The metric for data path availability has reported less than 25% health for at least two minutes. You'll experience significant performance degradation or lack of availability for inbound connectivity. There may be user or platform events causing unavailability. [Follow the troubleshooting RHC guide](./troubleshoot-rhc.md) to determine whether there are user-initiated events impacting your availability. |
+| Unknown | Health status for your load balancer resource hasn't been updated or hasn't received information for data path availability in the last 10 minutes. This state should be transient and will reflect the correct status as soon as data is received. |
+
+To view the health of your public standard load balancer resources:
-To view the health of your public Standard Load Balancer resources:
-1. Select **Monitor** > **Service Health**.
+1. Select **Monitor** > **Service health**.
- ![Monitor page](./media/load-balancer-standard-diagnostics/LBHealth1.png)
+ :::image type="content" source="./media/load-balancer-standard-diagnostics/lbhealth1.png" alt-text="The service health link on Azure Monitor.":::
- *Figure: The Service Health link on Azure Monitor*
+ *Figure: The service health link on Azure Monitor*
-2. Select **Resource Health**, and then make sure that **Subscription ID** and **Resource Type = Load Balancer** are selected.
+2. Select **Resource health**, and then make sure that **Subscription ID** and **Resource type = load balancer** are selected.
- ![Resource health status](./media/load-balancer-standard-diagnostics/LBHealth3.png)
+ :::image type="content" source="./media/load-balancer-standard-diagnostics/lbhealth3.png" alt-text="Select resource for health view.":::
*Figure: Select resource for health view*
-3. In the list, select the Load Balancer resource to view its historical health status.
+3. In the list, select the load balancer resource to view its historical health status.
- ![Load Balancer health status](./media/load-balancer-standard-diagnostics/LBHealth4.png)
+ :::image type="content" source="./media/load-balancer-standard-diagnostics/lbhealth4.png" alt-text="Resource health status.":::
- *Figure: Load Balancer resource health view*
+ *Figure: Resource health status*
-Generic resource health status description are available in the [RHC documentation](../service-health/resource-health-overview.md). For specific statuses for the Azure Load Balancer are listed in the below table:
-
+A generic description of a resource health status is available in the [resource health documentation](../service-health/resource-health-overview.md).
## Next steps -- Learn about [Network Analytics](../azure-monitor/insights/azure-networking-analytics.md)-- Learn about using [Insights](./load-balancer-insights.md) to view these metrics preconfigured for your Load Balancer-- Learn more about [Standard Load Balancer](./load-balancer-overview.md).-- Learn more about your [Load balancer outbound connectivity](./load-balancer-outbound-connections.md).-- Learn about [Azure Monitor](../azure-monitor/overview.md).-- Learn about the [Azure Monitor REST API](/rest/api/monitor/) and [how to retrieve metrics via REST API](/rest/api/monitor/metrics/list).
+- Learn about [Network Analytics](../azure-monitor/insights/azure-networking-analytics.md).
+- Learn about using [Insights](./load-balancer-insights.md) to view these metrics pre-configured for your load balancer.
+- Learn more about [Standard load balancer](./load-balancer-overview.md).
+
load-testing Tutorial Cicd Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-cicd-azure-pipelines.md
Previously updated : 11/30/2021 Last updated : 01/25/2022 #Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every merge request and/or deployment by using Azure Pipelines.
You'll learn how to:
* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. * An Azure DevOps organization and project. If you don't have an Azure DevOps organization, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up?view=azure-devops&preserve-view=true). If you need help with getting started with Azure Pipelines, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?preserve-view=true&view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser).
-* A GitHub account, where you can create a repository. If you don't have one, you can [create one for free](https://github.com/).
-* An existing Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create_resource).
+* A GitHub account, where you can create a repository. If you don't have one, you can [create one for free](https://github.com/).
## Set up your repository
First, you'll install the Azure Load Testing extension from the Azure DevOps Mar
For every update to the main branch, the Azure pipeline executes the following steps: - Deploy the sample Node.js application to an Azure App Service web app. The name of the web app is configured in the pipeline definition.
+- Create an Azure Load Testing resource using the Azure Resource Manager (ARM) template present in the GitHub repository. For more information about ARM templates, see the [ARM template overview](/azure/azure-resource-manager/templates/overview).
- Trigger Azure Load Testing to create and run the load test, based on the Apache JMeter script and the test configuration YAML file in the repository. To view the results of the load test in the pipeline log:
The following YAML code snippet describes how to use the task in an Azure Pipeli
[ { "name": "<Name of the secret>",
- "value": "$(mySecret1)",
+ "value": "$(mySecret1)"
}, { "name": "<Name of the secret>",
- "value": "$(mySecret1)",
+ "value": "$(mySecret1)"
} ] env: | [ { "name": "<Name of the variable>",
- "value": "<Value of the variable>",
+ "value": "<Value of the variable>"
}, { "name": "<Name of the variable>",
- "value": "<Value of the variable>",
+ "value": "<Value of the variable>"
} ] ```
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-cicd-github-actions.md
Previously updated : 01/21/2022 Last updated : 01/27/2022 #Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every pull request and/or deployment by using GitHub Actions.
You'll learn how to:
* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. * A GitHub account where you can create a repository. If you don't have one, you can [create one for free](https://github.com/).
-* An existing Azure Load Testing resource. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md#create_resource).
## Set up your repository
Update the *SampleApp.yaml* GitHub Actions workflow file to configure the parame
The GitHub Actions workflow executes the following steps for every update to the main branch: - Deploy the sample Node.js application to an Azure App Service web app. The name of the web app is configured in the workflow file.
+- Create an Azure Load Testing resource using the Azure Resource Manager (ARM) template present in the GitHub repository. For more information about ARM templates, see the [ARM template overview](/azure/azure-resource-manager/templates/overview).
- Trigger Azure Load Testing to create and run the load test based on the Apache JMeter script and the test configuration YAML file in the repository. To view the results of the load test in the GitHub Actions workflow log:
logic-apps Logic Apps Enterprise Integration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-overview.md
Previously updated : 09/14/2021 Last updated : 01/27/2022 # B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack
The following diagram shows the high-level steps to start building B2B logic app
![Conceptual diagram showing prerequisite steps to create B2B logic app workflows.](media/logic-apps-enterprise-integration-overview/overview.png)
-## Try now
+## Try now sample
-[Deploy a fully operational sample logic app that sends and receives AS2 messages](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/logic-app-as2-send-receive)
+To try this [sample](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/logic-app-as2-send-receive), which deploys logic apps that send and receive AS2 messages through Azure, select **Deploy to Azure**. Before you run the sample, make sure that you manually update the **FabrikamSales-AS2Send** logic app workflow so that the **HTTP** action's **URI** property uses the URI that's dynamically generated for the **Request** trigger in the **Contoso-AS2Receive** logic app.
## Next steps
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview.md
ms.suite: integration
Previously updated : 08/18/2021 Last updated : 01/27/2022 # What is Azure Logic Apps?
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
Title: Reference guide for functions in expressions
-description: Reference guide to functions in expressions for Azure Logic Apps and Power Automate
+ Title: Reference guide for expression functions
+description: Reference guide to expression functions for Azure Logic Apps and Power Automate
ms.suite: integration Previously updated : 09/09/2021 Last updated : 01/27/2022
-# Reference guide to using functions in expressions for Azure Logic Apps and Power Automate
+# Reference guide to expression functions for Azure Logic Apps and Power Automate
-For workflow definitions in [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Power Automate](/power-automate/getting-started), some [expressions](../logic-apps/logic-apps-workflow-definition-language.md#expressions) get their values from runtime actions that might not yet exist when your workflow starts running. To reference these values or process the values in these expressions, you can use *functions* provided by the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md).
+For workflow definitions in [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Power Automate](/power-automate/getting-started), some [expressions](../logic-apps/logic-apps-workflow-definition-language.md#expressions) get their values from runtime actions that might not yet exist when your workflow starts running. To reference these values or process the values in these expressions, you can use *expression functions* provided by the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md).
> [!NOTE] > This reference page applies to both Azure Logic Apps and Power Automate, but appears in the
Either way, both examples assign the result to the `customerName` property.
The following example shows the correct and incorrect syntax: **Correct**: `"<text>/@{<function-name>('<parameter-name>')}/<text>"`
-
+ **Incorrect**: `"<text>/@<function-name>('<parameter-name>')/<text>"`
-
+ **OK**: `"@<function-name>('<parameter-name>')"` The following sections organize functions based on their general purpose, or you can browse these functions in [alphabetical order](#alphabetical-list).
To work with collections, generally arrays, strings, and sometimes, dictionaries
| [empty](../logic-apps/workflow-definition-language-functions-reference.md#empty) | Check whether a collection is empty. | | [first](../logic-apps/workflow-definition-language-functions-reference.md#first) | Return the first item from a collection. | | [intersection](../logic-apps/workflow-definition-language-functions-reference.md#intersection) | Return a collection that has *only* the common items across the specified collections. |
-| [item](../logic-apps/workflow-definition-language-functions-reference.md#item) | When inside a repeating action over an array, return the current item in the array during the action's current iteration. |
+| [item](../logic-apps/workflow-definition-language-functions-reference.md#item) | If this function appears inside a repeating action over an array, return the current item in the array during the action's current iteration. |
| [join](../logic-apps/workflow-definition-language-functions-reference.md#join) | Return a string that has *all* the items from an array, separated by the specified character. | | [last](../logic-apps/workflow-definition-language-functions-reference.md#last) | Return the last item from a collection. | | [length](../logic-apps/workflow-definition-language-functions-reference.md#length) | Return the number of items in a string or array. |
To work with collections, generally arrays, strings, and sometimes, dictionaries
To work with conditions, compare values and expression results, or evaluate various kinds of logic, you can use these logical comparison functions. For the full reference about each function, see the [alphabetical list](../logic-apps/workflow-definition-language-functions-reference.md#alphabetical-list). > [!NOTE]
-> If you use logical functions or conditions to compare values, null values are converted to empty string (`""`) values. The behavior of conditions differs when you compare with an empty string instead of a null value. For more information, see the [string() function](#string).
+> If you use logical functions or conditions to compare values, null values are converted to empty string (`""`) values. The behavior of conditions differs when you compare with an empty string instead of a null value. For more information, see the [string() function](#string).
| Logical comparison function | Task | | | - |
To work with conditions, compare values and expression results, or evaluate vari
## Conversion functions
-To change a value's type or format, you can use these conversion functions. For example, you can change a value from a Boolean to an integer. For more information about how Logic Apps handles content types during conversion, see [Handle content types](../logic-apps/logic-apps-content-type.md). For the full reference about each function, see the [alphabetical list](../logic-apps/workflow-definition-language-functions-reference.md#alphabetical-list).
+To change a value's type or format, you can use these conversion functions. For example, you can change a value from a Boolean to an integer. For more information about how Azure Logic Apps handles content types during conversion, see [Handle content types](../logic-apps/logic-apps-content-type.md). For the full reference about each function, see the [alphabetical list](../logic-apps/workflow-definition-language-functions-reference.md#alphabetical-list).
> [!NOTE] > Azure Logic Apps automatically or implicitly performs base64 encoding and decoding, so you don't have to manually perform these conversions by using
For example, suppose a trigger returns a numerical value as output:
`triggerBody()?['123']`
-If you use this numerical output where string input is expected, such as a URL, Logic Apps automatically converts the value into a string by using the curly braces (`{}`) notation:
+If you use this numerical output where string input is expected, such as a URL, Azure Logic Apps automatically converts the value into a string by using the curly braces (`{}`) notation:
`@{triggerBody()?['123']}`
For the full reference about each function, see the
| [body](#body) | Return an action's `body` output at runtime. See also [actionBody](../logic-apps/workflow-definition-language-functions-reference.md#actionBody). | | [formDataMultiValues](../logic-apps/workflow-definition-language-functions-reference.md#formDataMultiValues) | Create an array with the values that match a key name in *form-data* or *form-encoded* action outputs. | | [formDataValue](../logic-apps/workflow-definition-language-functions-reference.md#formDataValue) | Return a single value that matches a key name in an action's *form-data* or *form-encoded output*. |
-| [item](../logic-apps/workflow-definition-language-functions-reference.md#item) | When inside a repeating action over an array, return the current item in the array during the action's current iteration. |
-| [items](../logic-apps/workflow-definition-language-functions-reference.md#items) | When inside a Foreach or Until loop, return the current item from the specified loop.|
-| [iterationIndexes](../logic-apps/workflow-definition-language-functions-reference.md#iterationIndexes) | When inside an Until loop, return the index value for the current iteration. You can use this function inside nested Until loops. |
+| [item](../logic-apps/workflow-definition-language-functions-reference.md#item) | If this function appears inside a repeating action over an array, return the current item in the array during the action's current iteration. |
+| [items](../logic-apps/workflow-definition-language-functions-reference.md#items) | If this function appears inside a Foreach or Until loop, return the current item from the specified loop.|
+| [iterationIndexes](../logic-apps/workflow-definition-language-functions-reference.md#iterationIndexes) | If this function appears inside an Until loop, return the index value for the current iteration. You can use this function inside nested Until loops. |
| [listCallbackUrl](../logic-apps/workflow-definition-language-functions-reference.md#listCallbackUrl) | Return the "callback URL" that calls a trigger or action. | | [multipartBody](../logic-apps/workflow-definition-language-functions-reference.md#multipartBody) | Return the body for a specific part in an action's output that has multiple parts. | | [outputs](../logic-apps/workflow-definition-language-functions-reference.md#outputs) | Return an action's output at runtime. |
For the full reference about each function, see the
| [xpath](../logic-apps/workflow-definition-language-functions-reference.md#xpath) | Check XML for nodes or values that match an XPath (XML Path Language) expression, and return the matching nodes or values. | |||
+##
+ <a name="alphabetical-list"></a> ## All functions - alphabetical list This section lists all the available functions in alphabetical order.
+## A
+ <a name="action"></a> ### action
array('hello')
And returns this result: `["hello"]`
+## B
+ <a name="base64"></a> ### base64
These examples show the different supported types of input for `bool()`:
| `bool('true')` | String | `true` | | `bool('false')` | String | `false` |
+## C
+ <a name="coalesce"></a> ### coalesce
createArray('h', 'e', 'l', 'l', 'o')
And returns this result: `["h", "e", "l", "l", "o"]`
+## D
+ <a name="dataUri"></a> ### dataUri
div(11,5.0)
div(11.0,5) ```
+## E
+ <a name="encodeUriComponent"></a> ### encodeUriComponent
And returns these results:
* First example: Both values are equivalent, so the function returns `true`. * Second example: Both values aren't equivalent, so the function returns `false`.
+## F
+ <a name="first"></a> ### first
Suppose that you want to format the number `17.35`. This example formats the num
formatNumber(17.35, 'C2', 'is-is') ```
+## G
+ <a name="getFutureTime"></a> ### getFutureTime
guid('P')
And returns this result: `"(c2ecc88d-88c8-4096-912c-d6f2e2b138ce)"`
+## I
+ <a name="if"></a> ### if
items('myForEachLoopName')
### iterationIndexes
-Return the index value for the current iteration inside an Until loop. You can use this function inside nested Until loops.
+Return the index value for the current iteration inside an Until loop. You can use this function inside nested Until loops.
``` iterationIndexes('<loopName>') ```
-| Parameter | Required | Type | Description |
-| | -- | - | -- |
-| <*loopName*> | Yes | String | The name for the Until loop |
-|||||
+| Parameter | Required | Type | Description |
+| | -- | - | -- |
+| <*loopName*> | Yes | String | The name for the Until loop |
+|||||
-| Return value | Type | Description |
-| | - | -- |
-| <*index*> | Integer | The index value for the current iteration inside the specified Until loop |
-||||
+| Return value | Type | Description |
+| | - | -- |
+| <*index*> | Integer | The index value for the current iteration inside the specified Until loop |
+||||
-*Example*
+*Example*
This example creates a counter variable and increments that variable by one during each iteration in an Until loop until the counter value reaches five. The example also creates a variable that tracks the current index for each iteration. In the Until loop, during each iteration, the example increments the counter and then assigns the counter value to the current index value and then increments the counter. While in the loop, this example references the current iteration index by using the `iterationIndexes` function:
This example creates a counter variable and increments that variable by one duri
} ```
+## J
+ <a name="json"></a> ### json
json(xml('value'))
> [!IMPORTANT] > Without an XML schema that defines the output's structure, the function might return results > where the structure greatly differs from the expected format, depending on the input.
->
+>
> This behavior makes this function unsuitable for scenarios where the output must conform > to a well-defined contract, for example, in critical business systems or solutions.
And returns this result:
### intersection
-Return a collection that has *only* the
-common items across the specified collections.
-To appear in the result, an item must appear in
-all the collections passed to this function.
-If one or more items have the same name,
-the last item with that name appears in the result.
+Return a collection that has *only* the common items across the specified collections. To appear in the result, an item must appear in all the collections passed to this function. If one or more items have the same name, the last item with that name appears in the result.
``` intersection([<collection1>], [<collection2>], ...)
join(createArray('a', 'b', 'c'), '.')
And returns this result: `"a.b.c"`
+## L
+ <a name="last"></a> ### last
And return these results:
### listCallbackUrl
-Return the "callback URL" that calls a trigger or action.
-This function works only with triggers and actions for the
-**HttpWebhook** and **ApiConnectionWebhook** connector types,
-but not the **Manual**, **Recurrence**, **HTTP**, and **APIConnection** types.
+Return the "callback URL" that calls a trigger or action. This function works only with triggers and actions for the **HttpWebhook** and **ApiConnectionWebhook** connector types, but not the **Manual**, **Recurrence**, **HTTP**, and **APIConnection** types.
``` listCallbackUrl()
This example shows a sample callback URL that this function might return:
`"https://prod-01.westus.logic.azure.com:443/workflows/<*workflow-ID*>/triggers/manual/run?api-version=2016-10-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<*signature-ID*>"`
+## M
+ <a name="max"></a> ### max
mod(4, -3)
The example returns these results: - * First example: `-1` * Second example: `1`
multipartBody('<actionName>', <index>)
| <*body*> | String | The body for the specified part | ||||
+## N
+ <a name="not"></a> ### not
And return these results:
* First example: The expression is false, so the function returns `true`. * Second example: The expression is true, so the function returns `false`.
+## O
+ <a name="or"></a> ### or
And returns this result:
} ```
+## P
+ <a name="parameters"></a> ### parameters
-Return the value for a parameter that is
-described in your workflow definition.
+Return the value for a parameter that is described in your workflow definition.
``` parameters('<parameterName>')
parameters('fullName')
And returns this result: `"Sophia Owen"`
+## R
+ <a name="rand"></a> ### rand
-Return a random integer from a specified range,
-which is inclusive only at the starting end.
+Return a random integer from a specified range, which is inclusive only at the starting end.
``` rand(<minValue>, <maxValue>)
range(<startIndex>, <count>)
*Example*
-This example creates an integer array that starts from
-the specified index and has the specified number of integers:
+This example creates an integer array that starts from the specified index and has the specified number of integers:
``` range(1, 4)
replace('<text>', '<oldText>', '<newText>')
*Example*
-This example finds the "old" substring in "the old string"
-and replaces "old" with "new":
+This example finds the "old" substring in "the old string" and replaces "old" with "new":
``` replace('the old string', 'old', 'new')
Here's how the example returned array might look where the outer `outputs` objec
] ```
+## S
+ <a name="setProperty"></a> ### setProperty
Here's the updated JSON object:
### skip
-Remove items from the front of a collection,
-and return *all the other* items.
+Remove items from the front of a collection, and return *all the other* items.
``` skip([<collection>], <count>)
And returns this array with the remaining items: `[1,2,3]`
### split
-Return an array that contains substrings, separated by commas,
-based on the specified delimiter character in the original string.
+Return an array that contains substrings, separated by commas, based on the specified delimiter character in the original string.
``` split('<text>', '<delimiter>')
split('<text>', '<delimiter>')
*Example 1*
-This example creates an array with substrings from the specified
-string based on the specified character as the delimiter:
+This example creates an array with substrings from the specified string based on the specified character as the delimiter:
``` split('a_b_c', '_')
And returns this result: `"2018-03-01"`
### startsWith
-Check whether a string starts with a specific substring.
-Return true when the substring is found, or return false when not found.
-This function is not case-sensitive.
+Check whether a string starts with a specific substring. Return true when the substring is found, or return false when not found. This function is not case-sensitive.
``` startsWith('<text>', '<searchText>')
startsWith('<text>', '<searchText>')
*Example 1*
-This example checks whether the "hello world"
-string starts with the "hello" substring:
+This example checks whether the "hello world" string starts with the "hello" substring:
``` startsWith('hello world', 'hello')
And returns this result: `true`
*Example 2*
-This example checks whether the "hello world"
-string starts with the "greetings" substring:
+This example checks whether the "hello world" string starts with the "greetings" substring:
``` startsWith('hello world', 'greetings')
string(<value>)
| <*string-value*> | String | The string version for the specified value. If the *value* parameter is null or evaluates to null, this value is returned as an empty string (`""`) value. | |||| ---- *Example 1* This example creates the string version for this number:
And returns this result: `"10"`
*Example 2*
-This example creates a string for the specified JSON object
-and uses the backslash character (\\)
-as an escape character for the double-quotation mark (").
+This example creates a string for the specified JSON object and uses the backslash character (\\) as an escape character for the double-quotation mark (").
``` string( { "name": "Sophie Owen" } )
substring('<text>', <startIndex>, <length>)
*Example*
-This example creates a five-character substring from the specified string,
-starting from the index value 6:
+This example creates a five-character substring from the specified string, starting from the index value 6:
``` substring('hello world', 6, 5)
And returns this result: `"world"`
### subtractFromTime
-Subtract a number of time units from a timestamp.
-See also [getPastTime](#getPastTime).
+Subtract a number of time units from a timestamp. See also [getPastTime](#getPastTime).
``` subtractFromTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
subtractFromTime('2018-01-02T00:00:00Z', 1, 'Day', 'D')
And returns this result using the optional "D" format: `"Monday, January, 1, 2018"`
+## T
+ <a name="take"></a> ### take
take([<collection>], <count>)
*Example*
-These examples get the specified number of
-items from the front of these collections:
+These examples get the specified number of items from the front of these collections:
``` take('abcde', 3)
ticks('<timestamp>')
### toLower
-Return a string in lowercase format. If a character
-in the string doesn't have a lowercase version,
-that character stays unchanged in the returned string.
+Return a string in lowercase format. If a character in the string doesn't have a lowercase version, that character stays unchanged in the returned string.
``` toLower('<text>')
And returns this result: `"hello world"`
### toUpper
-Return a string in uppercase format. If a character
-in the string doesn't have an uppercase version,
-that character stays unchanged in the returned string.
+Return a string in uppercase format. If a character in the string doesn't have an uppercase version, that character stays unchanged in the returned string.
``` toUpper('<text>')
And returns this result: `"HELLO WORLD"`
### trigger
-Return a trigger's output at runtime,
-or values from other JSON name-and-value pairs,
-which you can assign to an expression.
+Return a trigger's output at runtime, or values from other JSON name-and-value pairs, which you can assign to an expression.
-* Inside a trigger's inputs, this function
-returns the output from the previous execution.
+* Inside a trigger's inputs, this function returns the output from the previous execution.
-* Inside a trigger's condition, this function
-returns the output from the current execution.
+* Inside a trigger's condition, this function returns the output from the current execution.
-By default, the function references the entire trigger object,
-but you can optionally specify a property whose value that you want.
-Also, this function has shorthand versions available,
-see [triggerOutputs()](#triggerOutputs) and [triggerBody()](#triggerBody).
+By default, the function references the entire trigger object, but you can optionally specify a property whose value you want.
+Also, this function has shorthand versions available; see [triggerOutputs()](#triggerOutputs) and [triggerBody()](#triggerBody).
``` trigger()
trigger()
### triggerBody
-Return a trigger's `body` output at runtime.
-Shorthand for `trigger().outputs.body`.
-See [trigger()](#trigger).
+Return a trigger's `body` output at runtime. Shorthand for `trigger().outputs.body`. See [trigger()](#trigger).
``` triggerBody()
And returns this array as an example result: `["https://feeds.a.dj.com/rss/RSSMa
### triggerFormDataValue
-Return a string with a single value that matches a key
-name in a trigger's *form-data* or *form-encoded* output.
-If the function finds more than one match,
-the function throws an error.
+Return a string with a single value that matches a key name in a trigger's *form-data* or *form-encoded* output. If the function finds more than one match, the function throws an error.
``` triggerFormDataValue('<key>')
triggerFormDataValue('<key>')
*Example*
-This example creates a string from the "feedUrl" key value in
-an RSS trigger's form-data or form-encoded output:
+This example creates a string from the "feedUrl" key value in an RSS trigger's form-data or form-encoded output:
``` triggerFormDataValue('feedUrl')
triggerMultipartBody(<index>)
### triggerOutputs
-Return a trigger's output at runtime,
-or values from other JSON name-and-value pairs.
-Shorthand for `trigger().outputs`.
-See [trigger()](#trigger).
+Return a trigger's output at runtime, or values from other JSON name-and-value pairs. Shorthand for `trigger().outputs`. See [trigger()](#trigger).
``` triggerOutputs()
triggerOutputs()
### trim
-Remove leading and trailing whitespace from a string,
-and return the updated string.
+Remove leading and trailing whitespace from a string, and return the updated string.
``` trim('<text>')
trim('<text>')
*Example*
-This example removes the leading and trailing
-whitespace from the string " Hello World ":
+This example removes the leading and trailing whitespace from the string " Hello World ":
``` trim(' Hello World ')
trim(' Hello World ')
And returns this result: `"Hello World"`
+## U
+ <a name="union"></a> ### union
-Return a collection that has *all* the items from the specified collections.
-To appear in the result, an item can appear in any collection
-passed to this function. If one or more items have the same name,
-the last item with that name appears in the result.
+Return a collection that has *all* the items from the specified collections. To appear in the result, an item can appear in any collection passed to this function. If one or more items have the same name, the last item with that name appears in the result.
``` union('<collection1>', '<collection2>', ...)
And returns this result: `[1, 2, 3, 10, 101]`
### uriComponent
-Return a uniform resource identifier (URI) encoded version for a
-string by replacing URL-unsafe characters with escape characters.
-Use this function rather than [encodeUriComponent()](#encodeUriComponent).
-Although both functions work the same way,
-`uriComponent()` is preferred.
+Return a uniform resource identifier (URI) encoded version for a string by replacing URL-unsafe characters with escape characters. Use this function rather than [encodeUriComponent()](#encodeUriComponent). Although both functions work the same way, `uriComponent()` is preferred.
``` uriComponent('<value>')
And returns this result:
### uriComponentToString
-Return the string version for a uniform resource identifier (URI) encoded string,
-effectively decoding the URI-encoded string.
+Return the string version for a uniform resource identifier (URI) encoded string, effectively decoding the URI-encoded string.
``` uriComponentToString('<value>')
utcNow('D')
And returns this result: `"Sunday, April 15, 2018"`
+## V
+ <a name="variables"></a> ### variables
variables('<variableName>')
*Example*
-Suppose the current value for a "numItems" variable is 20.
-This example gets the integer value for this variable:
+Suppose the current value for a "numItems" variable is 20. This example gets the integer value for this variable:
``` variables('numItems')
variables('numItems')
And returns this result: `20`
+## W
+ <a name="workflow"></a> ### workflow
For example, you can send custom email notifications from the flow itself that l
`<a href=https://flow.microsoft.com/manage/environments/@{workflow()['tags']['environmentName']}/flows/@{workflow()['name']}/details>Open flow @{workflow()['tags']['flowDisplayName']}</a>`
+## X
+ <a name="xml"></a> ### xml
And returns this result XML:
*Example 2*
-This example creates the XML version for this string,
-which contains a JSON object:
+This example creates the XML version for this string, which contains a JSON object:
`xml(json('{ "name": "Sophia Owen" }'))`
This example passes in the XPath expression, `'/produce/item/name'`, to find the
The example also uses the [parameters()](#parameters) function to get the XML string from `'items'` and convert the string to XML format by using the [xml()](#xml) function.
-Here is the result array with the nodes that match `<name></name`:
+Here's the result array with the nodes that match `<name></name>`:
`[ <name>Gala</name>, <name>Honeycrisp</name> ]`
Following on Example 1, this example passes in the XPath expression, `'/produce/
`xpath(xml(parameters('items')), '/produce/item/name[1]')`
-Here is the result: `Gala`
+Here's the result: `Gala`
*Example 3*
Following on Example 1, this example passes in the XPath expression, `'/produce/it
`xpath(xml(parameters('items')), '/produce/item/name[last()]')`
-Here is the result: `Honeycrisp`
+Here's the result: `Honeycrisp`
*Example 4*
This example passes in the XPath expression, `'//name[@expired]'`, to find all t
`xpath(xml(parameters('items')), '//name[@expired]')`
-Here is the result: `[ Gala, Honeycrisp ]`
+Here's the result: `[ Gala, Honeycrisp ]`
*Example 5*
This example passes in the XPath expression, `'//name[@expired = 'true']'`, to f
`xpath(xml(parameters('items')), '//name[@expired = 'true']')`
-Here is the result: `[ Gala ]`
+Here's the result: `[ Gala ]`
*Example 6*
This example passes in the XPath expression, `'//name[price>35]'`, to find all t
`xpath(xml(parameters('items')), '//name[price>35]')`
-Here is the result: `Honeycrisp`
+Here's the result: `Honeycrisp`
*Example 7*
This example finds nodes that match the `<count></count>` node and adds those no
`xpath(xml(parameters('items')), 'sum(/produce/item/count)')`
-Here is the result: `30`
+Here's the result: `30`
*Example 8*
These expressions use either XPath expression, `/*[name()="file"]/*[name()="loca
* `xpath(xml(body('Http')), '/*[name()="file"]/*[name()="location"]')` * `xpath(xml(body('Http')), '/*[local-name()="file" and namespace-uri()="https://contoso.com"]/*[local-name()="location"]')`
-Here is the result node that matches the `<location></location>` node:
+Here's the result node that matches the `<location></location>` node:
`<location xmlns="https://contoso.com">Paris</location>`
Following on Example 8, this example uses the XPath expression, `'string(/*[name
`xpath(xml(body('Http')), 'string(/*[name()="file"]/*[name()="location"])')`
-Here is the result: `Paris`
+Here's the result: `Paris`
## Next steps
-Learn about the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md)
+Learn about the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md)
machine-learning Concept Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-differential-privacy.md
Differential privacy tries to protect against the possibility that a user can pr
Epsilon values are non-negative. Values below 1 provide full plausible deniability. Anything above 1 comes with a higher risk of exposure of the actual data. As you implement machine learning solutions with differential privacy, you want to work with epsilon values between 0 and 1.
-Another value directly correlated to epsilon is **delta**. Delta is a measure of the probability that a report is not fully private. The higher the delta, the higher the epsilon. Because these values are correlated, epsilon is used more often.
+Another value directly correlated to epsilon is **delta**. Delta is a measure of the probability that a report isn't fully private. The higher the delta, the higher the epsilon. Because these values are correlated, epsilon is used more often.
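For reference, the standard formal statement of (ε, δ)-differential privacy is shown below; this formulation is added for context and isn't quoted from the article itself. A randomized mechanism M satisfies (ε, δ)-differential privacy if, for all neighboring datasets D and D′ (differing in a single record) and all output sets S:

```latex
\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta
```

When δ = 0, this reduces to pure ε-differential privacy, which matches the interpretation above: a larger delta (and hence a larger effective epsilon) means a higher chance that a report leaks information about the underlying data.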
## Limit queries with a privacy budget
To ensure privacy in systems where multiple queries are allowed, differential pr
## Reliability of data
-Although the preservation of privacy should be the goal, there is a tradeoff when it comes to usability and reliability of the data. In data analytics, accuracy can be thought of as a measure of uncertainty introduced by sampling errors. This uncertainty tends to fall within certain bounds. **Accuracy** from a differential privacy perspective instead measures the reliability of the data, which is affected by the uncertainty introduced by the privacy mechanisms. In short, a higher level of noise or privacy translates to data that has a lower epsilon, accuracy, and reliability.
+Although the preservation of privacy should be the goal, there's a tradeoff when it comes to usability and reliability of the data. In data analytics, accuracy can be thought of as a measure of uncertainty introduced by sampling errors. This uncertainty tends to fall within certain bounds. **Accuracy** from a differential privacy perspective instead measures the reliability of the data, which is affected by the uncertainty introduced by the privacy mechanisms. In short, a higher level of noise or privacy translates to data that has a lower epsilon, accuracy, and reliability.
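As a minimal sketch of this noise/accuracy trade-off (illustrative only, not the SmartNoise implementation), the classic Laplace mechanism draws noise with scale sensitivity/epsilon, so a smaller epsilon produces noisier, less reliable, but more private results:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of true_value.

    Noise is drawn from Laplace(0, sensitivity / epsilon): a smaller epsilon
    means a larger noise scale, which increases privacy but lowers accuracy.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# A counting query (sensitivity 1) released at two privacy levels.
print(laplace_mechanism(1000, sensitivity=1.0, epsilon=0.1))  # noisier, more private
print(laplace_mechanism(1000, sensitivity=1.0, epsilon=1.0))  # less noisy, less private
```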
## Open-source differential privacy libraries
The system library provides the following tools and services for working with ta
||| |Data Access | Library that intercepts and processes SQL queries and produces reports. This library is implemented in Python and supports the following ODBC and DBAPI data sources:<ul><li>PostgreSQL</li><li>SQL Server</li><li>Spark</li><li>Presto</li><li>Pandas</li></ul>| |Service | Execution service that provides a REST endpoint to serve requests or queries against shared data sources. The service is designed to allow composition of differential privacy modules that operate on requests containing different delta and epsilon values, also known as heterogeneous requests. This reference implementation accounts for additional impact from queries on correlated data. |
-|Evaluator | Stochastic evaluator that checks for privacy violations, accuracy, and bias. The evaluator supports the following tests: <ul><li>Privacy Test - Determines whether a report adheres to the conditions of differential privacy.</li><li>Accuracy Test - Measures whether the reliability of reports falls within the upper and lower bounds given a 95% confidence level.</li><li>Utility Test - Determines whether the confidence bounds of a report are close enough to the data while still maximizing privacy.</li><li>Bias Test - Measures the distribution of reports for repeated queries to ensure they are not unbalanced</li></ul> |
+|Evaluator | Stochastic evaluator that checks for privacy violations, accuracy, and bias. The evaluator supports the following tests: <ul><li>Privacy Test - Determines whether a report adheres to the conditions of differential privacy.</li><li>Accuracy Test - Measures whether the reliability of reports falls within the upper and lower bounds given a 95% confidence level.</li><li>Utility Test - Determines whether the confidence bounds of a report are close enough to the data while still maximizing privacy.</li><li>Bias Test - Measures the distribution of reports for repeated queries to ensure they aren't unbalanced</li></ul> |
## Next steps
machine-learning Concept Fairness Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-fairness-ml.md
Two common types of AI-caused harms are:
- Harm of allocation: An AI system extends or withholds opportunities, resources, or information for certain groups. Examples include hiring, school admissions, and lending where a model might be much better at picking good candidates among a specific group of people than among other groups. -- Harm of quality-of-service: An AI system does not work as well for one group of people as it does for another. As an example, a voice recognition system might fail to work as well for women as it does for men.
+- Harm of quality-of-service: An AI system doesn’t work as well for one group of people as it does for another. As an example, a voice recognition system might fail to work as well for women as it does for men.
To reduce unfair behavior in AI systems, you have to assess and mitigate these harms.
Together, these components enable data scientists and business leaders t
## Assess fairness in machine learning models
-In the Fairlearn open-source package, fairness is conceptualized though an approach known as **group fairness**, which asks: Which groups of individuals are at risk for experiencing harms? The relevant groups, also known as subpopulations, are defined through **sensitive features** or sensitive attributes. Sensitive features are passed to an estimator in the Fairlearn open-source package as a vector or a matrix called `sensitive_features`. The term suggests that the system designer should be sensitive to these features when assessing group fairness.
+In the Fairlearn open-source package, fairness is conceptualized through an approach known as **group fairness**, which asks: Which groups of individuals are at risk for experiencing harms? The relevant groups, also known as subpopulations, are defined through **sensitive features** or sensitive attributes. Sensitive features are passed to an estimator in the Fairlearn open-source package as a vector or a matrix called `sensitive_features`. The term suggests that the system designer should be sensitive to these features when assessing group fairness.
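As a hedged illustration of how sensitive features are passed to the package (the labels, predictions, and group values below are made up, not taken from the article), a disaggregated metric can be computed with `MetricFrame`:

```python
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

# Placeholder labels, predictions, and a sensitive feature column.
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]
sensitive_features = ["group_a", "group_a", "group_b", "group_b", "group_a", "group_b"]

metric_frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive_features,
)

print(metric_frame.overall)   # accuracy over the whole dataset
print(metric_frame.by_group)  # accuracy broken out by the sensitive feature
```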
Something to be mindful of is whether these features contain privacy implications due to private data. But the word "sensitive" doesn't imply that these features shouldn't be used to make predictions. >[!NOTE]
-> A fairness assessment is not a purely technical exercise. The Fairlearn open-source package can help you assess the fairness of a model, but it will not perform the assessment for you. The Fairlearn open-source package helps identify quantitative metrics to assess fairness, but developers must also perform a qualitative analysis to evaluate the fairness of their own models. The sensitive features noted above is an example of this kind of qualitative analysis.
+> A fairness assessment is not a purely technical exercise. The Fairlearn open-source package can help you assess the fairness of a model, but it will not perform the assessment for you. The Fairlearn open-source package helps identify quantitative metrics to assess fairness, but developers must also perform a qualitative analysis to evaluate the fairness of their own models. The sensitive features noted above are an example of this kind of qualitative analysis.
During assessment phase, fairness is quantified through disparity metrics. **Disparity metrics** can evaluate and compare model's behavior across different groups either as ratios or as differences. The Fairlearn open-source package supports two classes of disparity metrics:
During assessment phase, fairness is quantified through disparity metrics. **Dis
### Parity constraints
-The Fairlearn open-source package includes a variety of unfairness mitigation algorithms. These algorithms support a set of constraints on the predictor's behavior called **parity constraints** or criteria. Parity constraints require some aspects of the predictor behavior to be comparable across the groups that sensitive features define (e.g., different races). The mitigation algorithms in the Fairlearn open-source package use such parity constraints to mitigate the observed fairness issues.
+The Fairlearn open-source package includes a variety of unfairness mitigation algorithms. These algorithms support a set of constraints on the predictor's behavior called **parity constraints** or criteria. Parity constraints require some aspects of the predictor behavior to be comparable across the groups that sensitive features define (for example, different races). The mitigation algorithms in the Fairlearn open-source package use such parity constraints to mitigate the observed fairness issues.
>[!NOTE] > Mitigating unfairness in a model means reducing the unfairness, but this technical mitigation cannot eliminate this unfairness completely. The unfairness mitigation algorithms in the Fairlearn open-source package can provide suggested mitigation strategies to help reduce unfairness in a machine learning model, but they are not solutions to eliminate unfairness completely. There may be other parity constraints or criteria that should be considered for each particular developer's machine learning model. Developers using Azure Machine Learning must determine for themselves if the mitigation sufficiently eliminates any unfairness in their intended use and deployment of machine learning models.
The Fairlearn open-source package supports the following types of parity constra
The Fairlearn open-source package provides postprocessing and reduction unfairness mitigation algorithms: -- Reduction: These algorithms take a standard black-box machine learning estimator (e.g., a LightGBM model) and generate a set of retrained models using a sequence of re-weighted training datasets. For example, applicants of a certain gender might be up-weighted or down-weighted to retrain models and reduce disparities across different gender groups. Users can then pick a model that provides the best trade-off between accuracy (or other performance metric) and disparity, which generally would need to be based on business rules and cost calculations. -- Post-processing: These algorithms take an existing classifier and the sensitive feature as input. Then, they derive a transformation of the classifier's prediction to enforce the specified fairness constraints. The biggest advantage of threshold optimization is its simplicity and flexibility as it does not need to retrain the model.
+- Reduction: These algorithms take a standard black-box machine learning estimator (for example, a LightGBM model) and generate a set of retrained models using a sequence of re-weighted training datasets. For example, applicants of a certain gender might be up-weighted or down-weighted to retrain models and reduce disparities across different gender groups. Users can then pick a model that provides the best trade-off between accuracy (or other performance metric) and disparity, which generally would need to be based on business rules and cost calculations.
+- Post-processing: These algorithms take an existing classifier and the sensitive feature as input. Then, they derive a transformation of the classifier's prediction to enforce the specified fairness constraints. The biggest advantage of threshold optimization is its simplicity and flexibility as it doesn't need to retrain the model. A minimal sketch of both approaches follows this list.
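The following minimal sketch shows the general shape of both approaches; the estimator and synthetic data are placeholders, so treat it as an outline under those assumptions rather than the package's canonical example.

```python
import numpy as np
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.reductions import DemographicParity, ExponentiatedGradient
from sklearn.linear_model import LogisticRegression

# Tiny synthetic dataset; in practice these come from your own data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
A = rng.choice(["group_a", "group_b"], size=200)  # sensitive feature

# Reduction approach: retrain the estimator under a parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=A)
print(mitigator.predict(X)[:5])

# Post-processing approach: adjust the threshold of an already-trained classifier.
postprocessor = ThresholdOptimizer(
    estimator=LogisticRegression().fit(X, y),
    constraints="demographic_parity",
    prefit=True,
)
postprocessor.fit(X, y, sensitive_features=A)
print(postprocessor.predict(X, sensitive_features=A)[:5])
```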
| Algorithm | Description | Machine learning task | Sensitive features | Supported parity constraints | Algorithm Type | | | | | | | |
machine-learning Concept Responsible Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-responsible-ml.md
In this article, you'll learn what responsible AI is and ways you can put it int
## Responsible AI principles
-Throughout the development and use of AI systems, trust must be at the core. Trust in the platform, process, and models. At Microsoft, responsible AI with regards tomachine learning encompasses the following values and principles:
+Throughout the development and use of AI systems, trust must be at the core. Trust in the platform, process, and models. At Microsoft, responsible AI with regard to machine learning encompasses the following values and principles:
- Understand machine learning models - Interpret and explain model behavior
Unfairness in AI systems can result in the following unintended consequences:
- Withholding opportunities, resources or information from individuals. - Reinforcing biases and stereotypes.
-Many aspects of fairness cannot be captured or represented by metrics. There are tools and practices that can improve fairness in the design and development of AI systems.
+Many aspects of fairness can't be captured or represented by metrics. There are tools and practices that can improve fairness in the design and development of AI systems.
Two key steps in reducing unfairness in AI systems are assessment and mitigation. We recommend [Fairlearn](https://github.com/fairlearn/fairlearn), an open-source package that can assess and mitigate the potential unfairness of AI systems. To learn more about fairness and the Fairlearn package, see the [Fairness in ML article](./concept-fairness-ml.md).
See the following sample to learn [how to deploy an encrypted inferencing web se
Documenting the right information in the machine learning process is key to making responsible decisions at each stage. Datasheets are a way to document machine learning assets that are used and created as part of the machine learning lifecycle.
-Models tend to be thought of as "opaque boxes" and often there is little information about them. Because machine learning systems are becoming more pervasive and are used for decision making, using datasheets is a step towards developing more responsible machine learning systems.
+Models tend to be thought of as "opaque boxes" and often there's little information about them. Because machine learning systems are becoming more pervasive and are used for decision making, using datasheets is a step towards developing more responsible machine learning systems.
Some model information you might want to document as part of a datasheet:
Some model information you might want to document as part of a datasheet:
- Training data used - Evaluation data used - Training model performance metrics-- Fairness information.
+- Fairness information
See the following sample to learn how to use the Azure Machine Learning SDK to implement [datasheets for models](https://github.com/microsoft/MLOps/blob/master/pytorch_with_datasheet/model_with_datasheet.ipynb).
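As a hedged sketch of the idea (the tag names and values below are illustrative, not a prescribed datasheet schema), this kind of information can be attached to a registered model as tags with the Azure Machine Learning SDK:

```python
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()  # assumes a workspace config.json is available

model = Model.register(
    workspace=ws,
    model_path="outputs/model",        # local path to the trained model files
    model_name="example-classifier",   # hypothetical model name
    tags={
        "intended_use": "Demo classification scenario",
        "training_data": "Example training dataset",
        "evaluation_data": "Example evaluation dataset",
        "accuracy": "0.91",
        "fairness_notes": "Assessed with Fairlearn; see experiment run for details",
    },
    description="Model registered with datasheet-style metadata in tags.",
)
print(model.name, model.version)
```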
machine-learning How To Machine Learning Interpretability Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-aml.md
In this how-to guide, you learn to use the interpretability package of the Azure
* Upload explanations to Azure Machine Learning Run History.
-* Use a visualization dashboard to interact with your model explanations, both in a Jupyter notebook and in the Azure Machine Learning studio.
+* Use a visualization dashboard to interact with your model explanations, both in a Jupyter Notebook and in the Azure Machine Learning studio.
* Deploy a scoring explainer alongside your model to observe explanations during inferencing.
The following example shows how to use the interpretability package on your pers
classes=classes) ```
-### Explain the entire model behavior (global explanation)
+### Explain the entire model behavior (global explanation)
Refer to the following example to help you get the aggregate (global) feature importance values.
tabular_explainer = TabularExplainer(clf.steps[-1][1],
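Continuing from a `TabularExplainer` like the fragment above, a minimal sketch of retrieving the aggregate importance values might look like the following (variable names such as `x_test` are assumptions carried over from the surrounding example):

```python
# Assumes tabular_explainer was created from a fitted model, as sketched above.
global_explanation = tabular_explainer.explain_global(x_test)

# Ranked feature importance values paired with the matching feature names.
ranked_values = global_explanation.get_ranked_global_values()
ranked_names = global_explanation.get_ranked_global_names()
print(dict(zip(ranked_names, ranked_values)))
```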
## Generate feature importance values via remote runs
-The following example shows how you can use the `ExplanationClient` class to enable model interpretability for remote runs. It is conceptually similar to the local process, except you:
+The following example shows how you can use the `ExplanationClient` class to enable model interpretability for remote runs. It's conceptually similar to the local process, except you:
* Use the `ExplanationClient` in the remote run to upload the interpretability context. * Download the context later in a local environment.
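A rough sketch of that flow is shown below; the experiment name, run ID, and the `global_explanation` object are placeholders you would supply from your own run.

```python
# Inside the remote training script: upload the explanation to Run History.
from azureml.core.run import Run
from azureml.interpret import ExplanationClient

run = Run.get_context()
client = ExplanationClient.from_run(run)
# global_explanation is assumed to have been computed earlier in the script.
client.upload_model_explanation(global_explanation, comment="global explanation: all features")

# Later, in a local environment: download the explanation again.
from azureml.core import Workspace

ws = Workspace.from_config()
client = ExplanationClient.from_run_id(ws, experiment_name="my-experiment", run_id="<run-id>")
downloaded_explanation = client.download_model_explanation(top_k=10)
```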
ExplanationDashboard(global_explanation, model, datasetX=x_test)
The visualizations support explanations on both engineered and raw features. Raw explanations are based on the features from the original dataset and engineered explanations are based on the features from the dataset with feature engineering applied.
-When attempting to interpret a model with respect to the original dataset, it is recommended to use raw explanations as each feature importance will correspond to a column from the original dataset. One scenario where engineered explanations might be useful is when examining the impact of individual categories from a categorical feature. If a one-hot encoding is applied to a categorical feature, then the resulting engineered explanations will include a different importance value per category, one per one-hot engineered feature. This encoding can be useful when narrowing down which part of the dataset is most informative to the model.
+When attempting to interpret a model with respect to the original dataset, it's recommended to use raw explanations as each feature importance will correspond to a column from the original dataset. One scenario where engineered explanations might be useful is when examining the impact of individual categories from a categorical feature. If a one-hot encoding is applied to a categorical feature, then the resulting engineered explanations will include a different importance value per category, one per one-hot engineered feature. This encoding can be useful when narrowing down which part of the dataset is most informative to the model.
> [!NOTE] > Engineered and raw explanations are computed sequentially. First an engineered explanation is created based on the model and featurization pipeline. Then the raw explanation is created based on that engineered explanation by aggregating the importance of engineered features that came from the same raw feature.
-### Create, edit and view dataset cohorts
+### Create, edit, and view dataset cohorts
The top ribbon shows the overall statistics on your model and data. You can slice and dice your data into dataset cohorts, or subgroups, to investigate or compare your model's performance and explanations across these defined subgroups. By comparing your dataset statistics and explanations across those subgroups, you can get a sense of why possible errors are happening in one group versus another.
The top ribbon shows the overall statistics on your model and data. You can slic
The first three tabs of the explanation dashboard provide an overall analysis of the trained model along with its predictions and explanations. #### Model performance
-Evaluate the performance of your model by exploring the distribution of your prediction values and the values of your model performance metrics. You can further investigate your model by looking at a comparative analysis of its performance across different cohorts or subgroups of your dataset. Select filters along y-value and x-value to cut across different dimensions. View metrics such as accuracy, precision, recall, false positive rate (FPR) and false negative rate (FNR).
+Evaluate the performance of your model by exploring the distribution of your prediction values and the values of your model performance metrics. You can further investigate your model by looking at a comparative analysis of its performance across different cohorts or subgroups of your dataset. Select filters along y-value and x-value to cut across different dimensions. View metrics such as accuracy, precision, recall, false positive rate (FPR), and false negative rate (FNR).
[![Model performance tab in the explanation visualization](./media/how-to-machine-learning-interpretability-aml/model-performance.gif)](./media/how-to-machine-learning-interpretability-aml/model-performance.gif#lightbox)
Explore your dataset statistics by selecting different filters along the X, Y, a
[![Dataset explorer tab in the explanation visualization](./media/how-to-machine-learning-interpretability-aml/dataset-explorer.gif)](./media/how-to-machine-learning-interpretability-aml/dataset-explorer.gif#lightbox) #### Aggregate feature importance
-Explore the top-k important features that impact your overall model predictions (also known as global explanation). Use the slider to show descending feature importance values. Select up to three cohorts to see their feature importance values side by side. Click on any of the feature bars in the graph to see how values of the selected feature impact model prediction in the dependence plot below.
+Explore the top-k important features that impact your overall model predictions (also known as global explanation). Use the slider to show descending feature importance values. Select up to three cohorts to see their feature importance values side by side. Select any of the feature bars in the graph to see how values of the selected feature impact model prediction in the dependence plot below.
[![Aggregate feature importance tab in the explanation visualization](./media/how-to-machine-learning-interpretability-aml/aggregate-feature-importance.gif)](./media/how-to-machine-learning-interpretability-aml/aggregate-feature-importance.gif#lightbox)
The fourth tab of the explanation tab lets you drill into an individual datapoin
### Visualization in Azure Machine Learning studio
-If you complete the [remote interpretability](how-to-machine-learning-interpretability-aml.md#generate-feature-importance-values-via-remote-runs) steps (uploading generated explanations to Azure Machine Learning Run History), you can view the visualizations on the explanations dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is a simpler version of the dashboard widget that's generated within your Jupyter notebook. What-If datapoint generation and ICE plots are disabled as there is no active compute in Azure Machine Learning studio that can perform their real time computations.
+If you complete the [remote interpretability](how-to-machine-learning-interpretability-aml.md#generate-feature-importance-values-via-remote-runs) steps (uploading generated explanations to Azure Machine Learning Run History), you can view the visualizations on the explanations dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is a simpler version of the dashboard widget that's generated within your Jupyter Notebook. What-If datapoint generation and ICE plots are disabled as there's no active compute in Azure Machine Learning studio that can perform their real-time computations.
-If the dataset, global, and local explanations are available, data populates all of the tabs. If only a global explanation is available, the Individual feature importance tab will be disabled.
+If the dataset, global, and local explanations are available, data populates all of the tabs. However, if only a global explanation is available, the Individual feature importance tab will be disabled.
Follow one of these paths to access the explanations dashboard in Azure Machine Learning studio:
You can deploy the explainer along with the original model and use it at inferen
## Troubleshooting
-* **Sparse data not supported**: The model explanation dashboard breaks/slows down substantially with a large number of features, therefore we currently do not support sparse data format. Additionally, general memory issues will arise with large datasets and large number of features.
+* **Sparse data not supported**: The model explanation dashboard breaks/slows down substantially with a large number of features, therefore we currently don't support sparse data format. Additionally, general memory issues will arise with large datasets and a large number of features.
+ * **Supported explanations features matrix** Supported explanation tab | Raw features (dense) | Raw features (sparse) | Engineered features (dense) | Engineered features (sparse) :-- | : | : | :- | :- | Model performance | Supported (not forecasting) | Supported (not forecasting) | Supported | Supported |
-Dataset explorer | Supported (not forecasting) | Not supported. Since sparse data is not uploaded and UI has issues rendering sparse data. | Supported | Not supported. Since sparse data is not uploaded and UI has issues rendering sparse data. |
Dataset explorer | Supported (not forecasting) | Not supported. Since sparse data isn't uploaded and UI has issues rendering sparse data. | Supported | Not supported. Since sparse data isn't uploaded and UI has issues rendering sparse data. |
Aggregate feature importance | Supported | Supported | Supported | Supported |
- Individual feature importance| Supported (not forecasting) | Not supported. Since sparse data is not uploaded and UI has issues rendering sparse data. | Supported | Not supported. Since sparse data is not uploaded and UI has issues rendering sparse data. |
--
-* **Forecasting models not supported with model explanations**: Interpretability, best model explanation, is not available for AutoML forecasting experiments that recommend the following algorithms as the best model: TCNForecaster, AutoArima, Prophet, ExponentialSmoothing, Average, Naive, Seasonal Average, and Seasonal Naive. AutoML Forecasting regression models support explanations. However, in the explanation dashboard, the "Individual feature importance" tab is not supported for forecasting because of complexity in their data pipelines.
+ Individual feature importance| Supported (not forecasting) | Not supported. Since sparse data isn't uploaded and UI has issues rendering sparse data. | Supported | Not supported. Since sparse data isn't uploaded and UI has issues rendering sparse data. |
-* **Local explanation for data index**: The explanation dashboard does not support relating local importance values to a row identifier from the original validation dataset if that dataset is greater than 5000 datapoints as the dashboard randomly downsamples the data. However, the dashboard shows raw dataset feature values for each datapoint passed into the dashboard under the Individual feature importance tab. Users can map local importances back to the original dataset through matching the raw dataset feature values. If the validation dataset size is less than 5000 samples, the `index` feature in AzureML studio will correspond to the index in the validation dataset.
+* **Forecasting models not supported with model explanations**: Interpretability, best model explanation, isn't available for AutoML forecasting experiments that recommend the following algorithms as the best model: TCNForecaster, AutoArima, Prophet, ExponentialSmoothing, Average, Naive, Seasonal Average, and Seasonal Naive. AutoML Forecasting regression models support explanations. However, in the explanation dashboard, the "Individual feature importance" tab isn't supported for forecasting because of complexity in their data pipelines.
-* **What-if/ICE plots not supported in studio**: What-If and Individual Conditional Expectation (ICE) plots are not supported in Azure Machine Learning studio under the Explanations tab since the uploaded explanation needs an active compute to recalculate predictions and probabilities of perturbed features. It is currently supported in Jupyter notebooks when run as a widget using the SDK.
+* **Local explanation for data index**: The explanation dashboard doesn't support relating local importance values to a row identifier from the original validation dataset if that dataset is greater than 5000 datapoints as the dashboard randomly downsamples the data. However, the dashboard shows raw dataset feature values for each datapoint passed into the dashboard under the Individual feature importance tab. Users can map local importances back to the original dataset through matching the raw dataset feature values. If the validation dataset size is less than 5000 samples, the `index` feature in AzureML studio will correspond to the index in the validation dataset.
+* **What-if/ICE plots not supported in studio**: What-If and Individual Conditional Expectation (ICE) plots aren't supported in Azure Machine Learning studio under the Explanations tab since the uploaded explanation needs an active compute to recalculate predictions and probabilities of perturbed features. It's currently supported in Jupyter notebooks when run as a widget using the SDK.
## Next steps
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-workspace-vnet.md
Azure Container Registry can be configured to use a private endpoint. Use the fo
> [!IMPORTANT] > Only AzureML Compute cluster of CPU SKU is supported for the image build on compute.
- >
- > Your storage account, compute cluster, and Azure Container Registry must all be in the same subnet of the virtual network.
For more information, see the [update()](/python/api/azureml-core/azureml.core.workspace.workspace#update-friendly-name-none--description-none--tags-none--image-build-compute-none--enable-data-actions-none-) method reference.
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-pipeline-python-sdk.md
If you don't have an Azure subscription, create a free account before you begin.
## Prerequisites * Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace.
-* A Python environment in which you've installed both the `azureml-core` and `azureml-pipelines` packages. This environment is for defining and controlling your Azure Machine Learning resources and is separate from the environment used at runtime for training.
+* A Python environment in which you've installed both the `azureml-core` and `azureml-pipeline` packages. This environment is for defining and controlling your Azure Machine Learning resources and is separate from the environment used at runtime for training.
> [!Important]
-> Currently, the most recent Python release compatible with `azureml-pipelines` is Python 3.8. If you've difficulty installing the `azureml-pipelines` package, ensure that `python --version` is a compatible release. Consult the documentation of your Python virtual environment manager (`venv`, `conda`, and so on) for instructions.
+> Currently, the most recent Python release compatible with `azureml-pipeline` is Python 3.8. If you have difficulty installing the `azureml-pipeline` package, ensure that `python --version` is a compatible release. Consult the documentation of your Python virtual environment manager (`venv`, `conda`, and so on) for instructions.
## Start an interactive Python session
if not found:
### Create a dataset for the Azure-stored data
-Fashion-MNIST] is a dataset of fashion images divided into 10 classes. Each image is a 28x28 grayscale image and there are 60,000 training and 10,000 test images. As an image classification problem, Fashion-MNIST is harder than the classic MNIST handwritten digit database. It's distributed in the same compressed binary form as the original [handwritten digit database](http://yann.lecun.com/exdb/mnist/) .
+Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image is a 28x28 grayscale image and there are 60,000 training and 10,000 test images. As an image classification problem, Fashion-MNIST is harder than the classic MNIST handwritten digit database. It's distributed in the same compressed binary form as the original [handwritten digit database](http://yann.lecun.com/exdb/mnist/) .
To create a `Dataset` that references the Web-based data, run:
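A minimal sketch of that call is shown below; the URL is a placeholder, so substitute the Fashion-MNIST file locations given in the tutorial, and `ws` is assumed to be your `Workspace` object.

```python
from azureml.core import Dataset

# Placeholder URL; replace with the Fashion-MNIST file locations from the tutorial.
data_urls = ["https://<storage-account>.blob.core.windows.net/<container>/train-images-idx3-ubyte.gz"]

fashion_ds = Dataset.File.from_files(path=data_urls)
fashion_ds = fashion_ds.register(workspace=ws, name="fashion_ds", create_new_version=True)
```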
Most of this code should be familiar to ML developers:
* The number of training epochs will be 10 * The model has three convolutional layers, with max pooling and dropout, followed by a dense layer and softmax head * The model is fitted for 10 epochs and then evaluated
-* The model architecture is written to "outputs/model/model.json" and the weights to `outputs/model/model.h5`
+* The model architecture is written to `outputs/model/model.json` and the weights to `outputs/model/model.h5`
Some of the code, though, is specific to Azure Machine Learning. `run = Run.get_context()` retrieves a [`Run`](/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py&preserve-view=True) object, which contains the current service context. The `train.py` source uses this `run` object to retrieve the input dataset via its name (an alternative to the code in `prepare.py` that retrieved the dataset via the `argv` array of script arguments).
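A hedged sketch of that pattern inside a training script follows; the dataset name `fashion_ds` is an assumption, and what `run.input_datasets` returns (a dataset object or a mounted/downloaded path) depends on how the step input was configured.

```python
from azureml.core import Run

run = Run.get_context()

# "fashion_ds" is assumed to be the name the dataset was passed in with.
fashion_input = run.input_datasets["fashion_ds"]
print(fashion_input)
```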
mariadb Howto Manage Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/howto-manage-vnet-cli.md
ms.devlang: azurecli Previously updated : 3/18/2020 Last updated : 01/26/2022 # Create and manage Azure Database for MariaDB VNet service endpoints using Azure CLI
mariadb Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-change-server-configuration.md
ms.devlang: azurecli Previously updated : 01/11/2022 Last updated : 01/26/2022 # List and update configurations of an Azure Database for MariaDB server using Azure CLI
mariadb Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-create-server-and-firewall-rule.md
ms.devlang: azurecli Previously updated : 01/11/2022 Last updated : 01/26/2022 # Create a MariaDB server and configure a firewall rule using the Azure CLI
mariadb Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-create-server-with-vnet-rule.md
ms.devlang: azurecli Previously updated : 01/11/2022 Last updated : 01/26/2022 # Create a MariaDB server and configure a vNet rule using the Azure CLI
mariadb Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-point-in-time-restore.md
ms.devlang: azurecli Previously updated : 01/11/2022 Last updated : 01/26/2022 # Restore an Azure Database for MariaDB server using Azure CLI
mariadb Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-scale-server.md
ms.devlang: azurecli Previously updated : 01/20/2022 Last updated : 01/26/2022 # Monitor and scale an Azure Database for MariaDB server using Azure CLI
mariadb Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/scripts/sample-server-logs.md
ms.devlang: azurecli Previously updated : 01/11/2022 Last updated : 01/26/2022 # Enable and download server slow query logs of an Azure Database for MariaDB server using Azure CLI
marketplace Analytics System Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/analytics-system-queries.md
description: Learn about system queries you can use to programmatically get anal
Previously updated : 12/03/2021 Last updated : 01/20/2022
The following system queries can be used in the [Create Report API](analytics-programmatic-access.md#create-report-api) directly with a `QueryId`. The system queries are like the export reports in Partner Center for a six month (6M) computation period.
-For more information on the column names, attributes, and description, see these articles:
+For more information on the column names, attributes, and description, see these articles about commercial marketplace analytics:
-- [Customers dashboard in commercial marketplace analytics](customer-dashboard.md#customer-details-table)-- [Orders dashboard in commercial marketplace analytics](orders-dashboard.md#orders-details-table)-- [Usage dashboard in commercial marketplace analytics](usage-dashboard.md#usage-details-table)-- [Marketplace Insights dashboard in commercial marketplace analytics](insights-dashboard.md#marketplace-insights-details-table)
+- [Customers dashboard](customer-dashboard.md#customer-details-table)
+- [Orders dashboard](orders-dashboard.md#orders-details-table)
+- [Usage dashboard](usage-dashboard.md#usage-details-table)
+- [Marketplace Insights dashboard](insights-dashboard.md#marketplace-insights-details-table)
+- [Revenue dashboard](revenue-dashboard.md)
+- [Quality of Service dashboard](quality-of-service-dashboard.md)
-The following sections provide report queries for various reports.
+The following sections provide various report queries.
## Customers report query
The following sections provide report queries for various reports.
`SELECT AssetId,SalesChannel,BillingAccountId,CustomerCity,CustomerCompanyName,CustomerCountry,CustomerEmail,CustomerId,CustomerName,CustomerState,EarningAmountCC,EarningAmountPC,EarningAmountUSD,EarningCurrencyCode,EarningExchangeRatePC,EstimatedPayoutMonth,Revenue,EstimatedRevenuePC,EstimatedRevenueUSD,ExchangeRateDate,ExchangeRatePC,ExchangeRateUSD,PayoutStatus,IncentiveRate,TrialDeployment,LineItemId,MonthStartDate,OfferName,OfferType,PaymentInstrumentType,PaymentSentDate,PurchaseRecordId,Quantity,SKU,TermEndDate,TermStartDate,TransactionAmountCC,TransactionAmountPC,TransactionAmountUSD,BillingModel,Units FROM ISVRevenue TIMESPAN LAST_6_MONTHS`
+## Quality of service report query
+
+**Report description**: Quality of service report for the last 3M
+
+**QueryID**: `q9df4939-073f-5795-b0cb-v2c81d11e58d`
+
+**Report query**:
+
+`SELECT OfferId,Sku,DeploymentStatus,DeploymentCorrelationId,SubscriptionId,CustomerTenantId,CustomerName,TemplateType,StartTime,EndTime,DeploymentDurationInMilliSeconds,DeploymentRegion,ResourceProvider,ResourceUri,ResourceGroup,ResourceType,ResourceName,ErrorCode,ErrorName,ErrorMessage,DeepErrorCode,DeepErrorMessage FROM ISVQualityOfService TIMESPAN LAST_3_MONTHS`
+ ## Next steps - [APIs for accessing commercial marketplace analytics data](analytics-available-apis.md)
marketplace Revenue Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/revenue-dashboard.md
In the lower left of most widgets, you'll see a thumbs up and thumbs down icon
| Data field | Definition | |-||
-|<img width=200/>|<img width=500/>|
+| <img width=130/> | |
| Billed revenue | Represents billed sales of a partner for customer's offer purchases and consumption through the commercial marketplace. This is in transaction currency and will always be present in download reports. | | Estimated revenue (USD) | Estimated revenue reported in US dollars. This column will always be present in download reports. | | Estimated revenue (PC) | Estimated revenue reported in partner preferred currency. This column will always be present in download reports. |
marketplace Test Drive Azure Subscription Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/test-drive-azure-subscription-setup.md
This article explains how to set up an Azure Marketplace subscription and **Dyna
:::image type="content" source="./media/test-drive/sign-in-to-account.png" alt-text="Signing in to your account.":::
-6. Create a new Security Group and add it to Canvas App (Power Apps). This step is only applicable to Dynamics 365 for Customer Engagement & PowerApps offers with the Canvas Apps option.
+6. Create a new Security Group and add it to Canvas App (Power Apps). This step is only applicable to Dynamics 365 for Customer Engagement & Power Apps offers with the Canvas Apps option.
1. Create a new Security Group. 1. Go to **Azure Active Directory**. 1. Under **Manage**, select **Groups**.
media-services Samples Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/samples-overview.md
You'll find description and links to the samples you may be looking for in each
|Sample|Description| |||
-|[Create an account from code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/CreateAccount/create-account.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, Managed Identity, storage auth, and bring your own encryption key.|
-|[Create an account with user assigned managed identity code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/CreateAccount/create-account_with_managed_identity.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, user or system assigned Managed Identity, storage auth, and bring your own encryption key.|
+|[Create an account from code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/create-account.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, Managed Identity, storage auth, and bring your own encryption key.|
+|[Create an account with user assigned managed identity code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/create-account_with_managed_identity.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, user or system assigned Managed Identity, storage auth, and bring your own encryption key.|
|[Hello World - list assets](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/HelloWorld-ListAssets/list-assets.ts)|Basic example of how to connect and list assets | |[Live streaming with Standard Passthrough](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event/index.ts)| Standard passthrough live streaming example. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live| |[Live streaming with Standard Passthrough with Event Hubs](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event_with_EventHub/index.ts)| Demonstrates how to use Event Hubs to subscribe to events on the live streaming channel. Events include encoder connections, disconnections, heartbeat, latency, discontinuity, and drift issues. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
media-services Legacy Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/legacy-components.md
na Previously updated : 08/24/2021 Last updated : 01/26/2022 # Azure Media Services legacy components
The *Windows Azure Media Encoder* (WAME) and *Azure Media Encoder* (AME) media p
The following Media Analytics media processors are either deprecated or soon to be deprecated:
-| **Media processor name** | **Retirement date** | **Additional notes** |
+| Media processor name | Retirement date | Additional notes |
| | | |
-| Azure Media Indexer | January 1st, 2020 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media](migrate-indexer-v1-v2.md) (formerly Video Indexer). |
-| Azure Media Indexer 2 | March 1, 2023 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media](migrate-indexer-v1-v2.md) (formerly Video Indexer). |
-| Motion Detection | June 1st, 2020|No replacement plans at this time. |
-| Video Summarization |June 1st, 2020|No replacement plans at this time.|
-| Video Optical Character Recognition | June 1st, 2020 |This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
-| Face Detector | June 1st, 2020 | This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
-| Content Moderator | June 1st, 2020 |This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
+| Azure Media Indexer | March 1, 2023 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media](migrate-indexer-v1-v2.md) (formerly Video Indexer). |
+| Azure Media Indexer 2 | January 1, 2020 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media](migrate-indexer-v1-v2.md) (formerly Video Indexer). |
+| Motion Detection | June 1, 2020|No replacement plans at this time. |
+| Video Summarization |June 1, 2020|No replacement plans at this time.|
+| Video Optical Character Recognition | June 1, 2020 |This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
+| Face Detector | June 1, 2020 | This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
+| Content Moderator | June 1, 2020 |This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
| Media Encoder Premium Workflow | February 29, 2024 | The AMS v2 API no longer supports the Premium Encoder. If you previously used the workflow-based Premium Encoder for HEVC encoding, you should migrate to the [new v3 Standard Encoder](../latest/encode-media-encoder-standard-formats-reference.md) with HEVC encoding support. <br/> If you require advanced workflow features of the Premium Encoder, you're encouraged to start using an Azure advanced encoding partner from [Imagine Communications](https://imaginecommunications.com/), [Telestream](https://telestream.net), or [Bitmovin](https://bitmovin.com). |
-## Next steps
+## Next step
[Migration guidance for moving from Media Services v2 to v3](../latest/migrate-v-2-v-3-migration-introduction.md)
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/connect-nodejs.md
Get the connection information needed to connect to the Azure Database for MySQL
4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. :::image type="content" source="./media/connect-nodejs/server-name-azure-database-mysql.png" alt-text="Azure Database for MySQL server name":::
-## Running the JavaScript code in Node.js
+## Running the code samples
-1. Paste the JavaScript code into text files, and then save it into a project folder with file extension .js (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js).
-2. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`.
-3. To run the application, enter the node command followed by the file name, such as `node createtable.js`.
-4. On Windows, if the node application is not in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`
+1. Paste the JavaScript code into new text files, and then save it into a project folder with file extension .js (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js).
+1. Replace `host`, `user`, `password`, and `database` config options in the code with the values that you specified when you created the server and database.
+1. **Obtain SSL certificate**: Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive.
+
+ **For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to BaltimoreCyberTrustRoot.crt.pem.
+
+ See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
+1. In the `ssl` config option, replace the `ca-cert` filename with the path to this local file.
+1. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`.
+1. To run the application, enter the node command followed by the file name, such as `node createtable.js`.
+1. On Windows, if the node application is not in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`
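As a convenience only, here's one way to download the certificate from a terminal with `curl`; the output file name is just a suggestion, and downloading it in a browser works equally well:

```bash
# Download the Baltimore CyberTrust Root certificate referenced in the steps above.
curl -o BaltimoreCyberTrustRoot.crt.pem https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
```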
## Connect, create table, and insert data
Use the following code to connect and load the data by using **CREATE TABLE** an
The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) function is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) function is used to execute the SQL query against MySQL database.
-Replace the `host`, `user`, `password`, and `database` parameters with the values that you specified when you created the server and database.
- ```javascript const mysql = require('mysql');
+const fs = require('fs');
var config = {
var config =
password: 'your_password', database: 'quickstartdb', port: 3306,
- ssl: true
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
}; const conn = new mysql.createConnection(config);
Use the following code to connect and read the data by using a **SELECT** SQL st
The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database. The results array is used to hold the results of the query.
-Replace the `host`, `user`, `password`, and `database` parameters with the values that you specified when you created the server and database.
- ```javascript const mysql = require('mysql');
+const fs = require('fs');
var config = {
var config =
password: 'your_password', database: 'quickstartdb', port: 3306,
- ssl: true
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
}; const conn = new mysql.createConnection(config);
function readData(){
## Update data
-Use the following code to connect and read the data by using an **UPDATE** SQL statement.
+Use the following code to connect and update data by using an **UPDATE** SQL statement.
The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
-Replace the `host`, `user`, `password`, and `database` parameters with the values that you specified when you created the server and database.
- ```javascript const mysql = require('mysql');
+const fs = require('fs');
var config = {
var config =
password: 'your_password', database: 'quickstartdb', port: 3306,
- ssl: true
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
}; const conn = new mysql.createConnection(config);
function updateData(){
## Delete data
-Use the following code to connect and read the data by using a **DELETE** SQL statement.
+Use the following code to connect and delete data by using a **DELETE** SQL statement.
The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
-Replace the `host`, `user`, `password`, and `database` parameters with the values that you specified when you created the server and database.
```javascript const mysql = require('mysql');
+const fs = require('fs');
var config = {
var config =
password: 'your_password', database: 'quickstartdb', port: 3306,
- ssl: true
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_BaltimoreCyberTrustRoot.crt.pem")}
}; const conn = new mysql.createConnection(config);
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-audit-logs.md
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
```kusto AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
+ | where Resource == '<your server name>'
| where Category == 'MySqlAuditLogs' and event_class_s == "general_log"
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | project TimeGenerated, Resource, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
| order by TimeGenerated asc nulls last ```
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
```kusto AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
+ | where Resource == '<your server name>'
| where Category == 'MySqlAuditLogs' and event_class_s == "connection_log"
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | project TimeGenerated, Resource, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
| order by TimeGenerated asc nulls last ```
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
```kusto AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
+ | where Resource == '<your server name>'
| where Category == 'MySqlAuditLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | project TimeGenerated, Resource, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
| summarize count() by event_class_s, event_subclass_s, user_s, ip_s ```
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
```kusto AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
+ | where Resource == '<your server name>'
| where Category == 'MySqlAuditLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
- | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m)
+ | project TimeGenerated, Resource, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | summarize count() by Resource, bin(TimeGenerated, 5m)
| render timechart ```
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
```kusto AzureDiagnostics | where Category == 'MySqlAuditLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
+ | project TimeGenerated, Resource, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
| order by TimeGenerated asc nulls last ```
mysql Concepts Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/concepts-slow-query-logs.md
Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Log
```Kusto AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
+ | where Resource == '<your server name>'
| where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
+ | project TimeGenerated, Resource , event_class_s, start_time_t , query_time_d, sql_text_s
| where query_time_d > 10 ```
Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Log
```Kusto AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
+ | where Resource == '<your server name>'
| where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
+ | project TimeGenerated, Resource , event_class_s, start_time_t , query_time_d, sql_text_s
| order by query_time_d desc | take 5 ```
Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Log
```Kusto AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
+ | where Resource == '<your server name>'
| where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | summarize count(), min(query_time_d), max(query_time_d), avg(query_time_d), stdev(query_time_d), percentile(query_time_d, 95) by LogicalServerName_s
+ | project TimeGenerated, Resource , event_class_s, start_time_t , query_time_d, sql_text_s
+ | summarize count(), min(query_time_d), max(query_time_d), avg(query_time_d), stdev(query_time_d), percentile(query_time_d, 95) by Resource
``` - Graph the slow query distribution on a particular server ```Kusto AzureDiagnostics
- | where LogicalServerName_s == '<your server name>'
+ | where Resource == '<your server name>'
| where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
- | summarize count() by LogicalServerName_s, bin(TimeGenerated, 5m)
+ | project TimeGenerated, Resource , event_class_s, start_time_t , query_time_d, sql_text_s
+ | summarize count() by Resource , bin(TimeGenerated, 5m)
| render timechart ```
Once your slow query logs are piped to Azure Monitor Logs through Diagnostic Log
```Kusto AzureDiagnostics | where Category == 'MySqlSlowLogs'
- | project TimeGenerated, LogicalServerName_s, event_class_s, start_time_t , query_time_d, sql_text_s
+ | project TimeGenerated, Resource , event_class_s, start_time_t , query_time_d, sql_text_s
| where query_time_d > 10 ```
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/connect-nodejs.md
+
+ Title: 'Quickstart: Connect using Node.js - Azure Database for MySQL - Flexible Server'
+description: This quickstart provides several Node.js code samples you can use to connect and query data from Azure Database for MySQL - Flexible Server.
++++
+ms.devlang: javascript
+ Last updated : 01/27/2022+
+# Quickstart: Use Node.js to connect and query data in Azure Database for MySQL - Flexible Server
++
+In this quickstart, you connect to an Azure Database for MySQL - Flexible Server by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
+
+This topic assumes that you're familiar with developing using Node.js, but you're new to working with Azure Database for MySQL - Flexible Server.
+
+## Prerequisites
+
+This quickstart uses the resources created in either of these guides as a starting point:
+
+- [Create an Azure Database for MySQL Flexible Server using Azure portal](./quickstart-create-server-portal.md)
+- [Create an Azure Database for MySQL Flexible Server using Azure CLI](./quickstart-create-server-cli.md)
+
+> [!IMPORTANT]
+> Ensure the IP address you're connecting from has been added to the server's firewall rules by using the [Azure portal](./how-to-manage-firewall-portal.md) or the [Azure CLI](./how-to-manage-firewall-cli.md).
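
As an illustration only, the following Azure CLI sketch adds a client IP address to the server's firewall. The resource group, server name, and IP address are placeholders rather than values created by this quickstart, so substitute your own:

```azurecli
# Hypothetical resource names and client IP address; replace them with your own values.
az mysql flexible-server firewall-rule create \
  --resource-group <resource-group> \
  --name <server-name> \
  --rule-name AllowMyClientIP \
  --start-ip-address <your-client-ip> \
  --end-ip-address <your-client-ip>
```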
+
+## Install Node.js and the MySQL connector
+
+Depending on your platform, follow the instructions in the appropriate section to install [Node.js](https://nodejs.org). Use npm to install the [mysql](https://www.npmjs.com/package/mysql) package and its dependencies into your project folder.
+
+### Windows
+
+1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your desired Windows installer option.
+2. Make a local project folder such as `nodejsmysql`.
+3. Open the command prompt, and then change directory into the project folder, such as `cd c:\nodejsmysql\`
+4. Run the NPM tool to install the mysql library into the project folder.
+
+ ```cmd
+ cd c:\nodejsmysql\
+ "C:\Program Files\nodejs\npm" install mysql
+ "C:\Program Files\nodejs\npm" list
+ ```
+
+5. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
+
+### Linux (Ubuntu)
+
+1. Run the following commands to install **Node.js** and **npm**, the package manager for Node.js.
+
+ ```bash
+ # Using Ubuntu
+ curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
+ sudo apt-get install -y nodejs
+
+ # Using Debian, as root
+ curl -sL https://deb.nodesource.com/setup_14.x | bash -
+ apt-get install -y nodejs
+ ```
+
+2. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
+
+ ```bash
+ mkdir nodejsmysql
+ cd nodejsmysql
+ npm install --save mysql
+ npm list
+ ```
+3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
+
+### macOS
+
+1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your macOS installer.
+
+2. Run the following commands to create a project folder `nodejsmysql` and install the mysql package into that folder.
+
+ ```bash
+ mkdir nodejsmysql
+ cd nodejsmysql
+ npm install --save mysql
+ npm list
+ ```
+
+3. Verify the installation by checking the `npm list` output text. The version number may vary as new patches are released.
+
+## Get connection information
+
+Get the connection information needed to connect to the Azure Database for MySQL - Flexible Server. You need the fully qualified server name and sign-in credentials.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Select the server name.
+4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
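
If you prefer the command line, here's a minimal Azure CLI sketch that returns the fully qualified server name. The resource group and server names are placeholders, not values created by this quickstart:

```azurecli
# Hypothetical resource group and server names; replace them with your own.
az mysql flexible-server show \
  --resource-group <resource-group> \
  --name <server-name> \
  --query fullyQualifiedDomainName \
  --output tsv
```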
+
+## Running the code samples
+
+1. Paste the JavaScript code into new text files, and then save it into a project folder with file extension .js (such as C:\nodejsmysql\createtable.js or /home/username/nodejsmysql/createtable.js).
+1. Replace `host`, `user`, `password` and `database` config options in the code with the values that you specified when you created the MySQL flexible server and database.
+1. **Obtain SSL certificate**: To use encrypted connections with your client applications, you'll need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), which is also available on the Azure portal **Networking** blade, as shown in the following screenshot. You can also download it from the command line, as shown after these steps.
+ :::image type="content" source="./media/how-to-connect-tls-ssl/download-ssl.png" alt-text="Screenshot showing how to download public SSL certificate from Azure portal.":::
+
+ Save the certificate file to your preferred location.
+
+1. In the `ssl` config option, replace the `ca-cert` filename with the path to this local file. This will allow the application to connect securely to the database over SSL.
+1. Open the command prompt or bash shell, and then change directory into your project folder `cd nodejsmysql`.
+1. To run the application, enter the node command followed by the file name, such as `node createtable.js`.
+1. On Windows, if the node application is not in your environment variable path, you may need to use the full path to launch the node application, such as `"C:\Program Files\nodejs\node.exe" createtable.js`
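
As a convenience only, here's one way to download the public SSL certificate from a terminal with `curl`; the output file name is just a suggestion:

```bash
# Download the public SSL certificate referenced in the steps above.
curl -o DigiCertGlobalRootCA.crt.pem https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
```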
+
+## Connect, create table, and insert data
+
+Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements.
+
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) function is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) function is used to execute the SQL query against MySQL database.
+
+```javascript
+const mysql = require('mysql');
+const fs = require('fs');
+
+var config =
+{
+ host: 'your_server_name.mysql.database.azure.com',
+ user: 'your_admin_name',
+ password: 'your_admin_password',
+ database: 'quickstartdb',
+ port: 3306,
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")}
+};
+
+const conn = new mysql.createConnection(config);
+
+conn.connect(
+ function (err) {
+ if (err) {
+ console.log("!!! Cannot connect !!! Error:");
+ throw err;
+ }
+ else
+ {
+ console.log("Connection established.");
+ queryDatabase();
+ }
+});
+
+function queryDatabase()
+{
+ conn.query('DROP TABLE IF EXISTS inventory;',
+ function (err, results, fields) {
+ if (err) throw err;
+ console.log('Dropped inventory table if existed.');
+ }
+ )
+ conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);',
+ function (err, results, fields) {
+ if (err) throw err;
+ console.log('Created inventory table.');
+ }
+ )
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150],
+ function (err, results, fields) {
+ if (err) throw err;
+ else console.log('Inserted ' + results.affectedRows + ' row(s).');
+ }
+ )
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 250],
+ function (err, results, fields) {
+ if (err) throw err;
+ console.log('Inserted ' + results.affectedRows + ' row(s).');
+ }
+ )
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100],
+ function (err, results, fields) {
+ if (err) throw err;
+ console.log('Inserted ' + results.affectedRows + ' row(s).');
+ }
+ )
+ conn.end(function (err) {
+ if (err) throw err;
+ else console.log('Done.')
+ });
+};
+```
+
+## Read data
+
+Use the following code to connect and read the data by using a **SELECT** SQL statement.
+
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database. The results array is used to hold the results of the query.
+
+```javascript
+const mysql = require('mysql');
+const fs = require('fs');
+
+var config =
+{
+ host: 'your_server_name.mysql.database.azure.com',
+ user: 'your_admin_name',
+ password: 'your_admin_password',
+ database: 'quickstartdb',
+ port: 3306,
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")}
+};
+
+const conn = new mysql.createConnection(config);
+
+conn.connect(
+ function (err) {
+ if (err) {
+ console.log("!!! Cannot connect !!! Error:");
+ throw err;
+ }
+ else {
+ console.log("Connection established.");
+ readData();
+ }
+ });
+
+function readData(){
+ conn.query('SELECT * FROM inventory',
+ function (err, results, fields) {
+ if (err) throw err;
+ else console.log('Selected ' + results.length + ' row(s).');
+ for (i = 0; i < results.length; i++) {
+ console.log('Row: ' + JSON.stringify(results[i]));
+ }
+ console.log('Done.');
+ })
+ conn.end(
+ function (err) {
+ if (err) throw err;
+ else console.log('Closing connection.')
+ });
+};
+```
+
+## Update data
+
+Use the following code to connect and update the data by using an **UPDATE** SQL statement.
+
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
+
+```javascript
+const mysql = require('mysql');
+const fs = require('fs');
+
+var config =
+{
+ host: 'your_server_name.mysql.database.azure.com',
+ user: 'your_admin_name',
+ password: 'your_admin_password',
+ database: 'quickstartdb',
+ port: 3306,
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")}
+};
+
+const conn = new mysql.createConnection(config);
+
+conn.connect(
+ function (err) {
+ if (err) {
+ console.log("!!! Cannot connect !!! Error:");
+ throw err;
+ }
+ else {
+ console.log("Connection established.");
+ updateData();
+ }
+ });
+
+function updateData(){
+ conn.query('UPDATE inventory SET quantity = ? WHERE name = ?', [75, 'banana'],
+ function (err, results, fields) {
+ if (err) throw err;
+ else console.log('Updated ' + results.affectedRows + ' row(s).');
+ })
+ conn.end(
+ function (err) {
+ if (err) throw err;
+ else console.log('Done.')
+ });
+};
+```
+
+## Delete data
+
+Use the following code to connect and delete data by using a **DELETE** SQL statement.
+
+The [mysql.createConnection()](https://github.com/mysqljs/mysql#establishing-connections) method is used to interface with the MySQL server. The [connect()](https://github.com/mysqljs/mysql#establishing-connections) method is used to establish the connection to the server. The [query()](https://github.com/mysqljs/mysql#performing-queries) method is used to execute the SQL query against MySQL database.
++
+```javascript
+const mysql = require('mysql');
+const fs = require('fs');
+
+var config =
+{
+ host: 'your_server_name.mysql.database.azure.com',
+ user: 'your_admin_name',
+ password: 'your_admin_password',
+ database: 'quickstartdb',
+ port: 3306,
+ ssl: {ca: fs.readFileSync("your_path_to_ca_cert_file_DigiCertGlobalRootCA.crt.pem")}
+};
+
+const conn = new mysql.createConnection(config);
+
+conn.connect(
+ function (err) {
+ if (err) {
+ console.log("!!! Cannot connect !!! Error:");
+ throw err;
+ }
+ else {
+ console.log("Connection established.");
+ deleteData();
+ }
+ });
+
+function deleteData(){
+ conn.query('DELETE FROM inventory WHERE name = ?', ['orange'],
+ function (err, results, fields) {
+ if (err) throw err;
+ else console.log('Deleted ' + results.affectedRows + ' row(s).');
+ })
+ conn.end(
+ function (err) {
+ if (err) throw err;
+ else console.log('Done.')
+ });
+};
+```
+
+## Clean up resources
+
+To clean up all resources used during this quickstart, delete the resource group using the following command:
+
+```azurecli
+az group delete \
+ --name $AZ_RESOURCE_GROUP \
+ --yes
+```
+
+## Next steps
+
+- [Encrypted connectivity using Transport Layer Security (TLS 1.2) in Azure Database for MySQL - Flexible Server](./how-to-connect-tls-ssl.md).
+- Learn more about [Networking in Azure Database for MySQL Flexible Server](./concepts-networking.md).
+- [Create and manage Azure Database for MySQL Flexible Server firewall rules using the Azure portal](./how-to-manage-firewall-portal.md).
+- [Create and manage Azure Database for MySQL Flexible Server virtual network using Azure portal](./how-to-manage-virtual-network-portal.md).
object-anchors Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/best-practices.md
We recommend trying some of these steps to get the best results.
- Confirm the nominal gravity direction that corresponds to the real world vertical orientation of the object. If the object's downward vertical/gravity is -Y, use ***(0, -1, 0)*** or ***(0, 0, -1)*** for -Z, and likewise for any other direction.
- - Make sure that the 3D model is encoded in one of the supported formats: `.glb`, `.gltf`, `.ply`, `.fbx`, `.obj`.
+ - Make sure that the 3D model is encoded in one of the supported formats: `.glb`, `.ply`, `.fbx`, `.obj`.
 - Our model conversion service could take a long time to process a large, high LOD (level-of-detail) model. For efficiency, you can preprocess your 3D model to remove the interior faces.
object-anchors Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/faq.md
For more information, see [Azure Object Anchors overview](overview.md).
**Q: What are the supported CAD formats?**
-**A:** We currently support `fbx`, `ply`, `obj`, `glb`, and `gltf` file types. For more information, see [Asset Requirements](overview.md).
+**A:** We currently support `fbx`, `ply`, `obj`, and `glb` file types. For more information, see [Asset Requirements](overview.md).
**Q: What is the gravity direction and unit required by the model conversion service?**
object-anchors Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/overview.md
The service will then convert your asset into an Azure Object Anchors model. Dow
Each dimension of an asset should be between 1 meter to 10 meters, and the file size should be less than 150 MB.
-The asset formats currently supported are: `fbx`, `ply`, `obj`, `glb`, and `gltf`.
+The asset formats currently supported are: `fbx`, `ply`, `obj`, and `glb`.
## Next steps
object-anchors Get Started Model Conversion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/quickstarts/get-started-model-conversion.md
Now, you can go ahead and convert your 3D model.
| Field | Description | | | |
- | InputAssetPath | The absolute path to a 3D model on your local machine. Supported file formats are `fbx`, `ply`, `obj`, `glb`, and `gltf`. |
+ | InputAssetPath | The absolute path to a 3D model on your local machine. Supported file formats are `fbx`, `ply`, `obj`, and `glb`. |
| AssetDimensionUnit | The unit of measurement of your 3D model. All the supported units of measurement can be accessed using the `Azure.MixedReality.ObjectAnchors.Conversion.AssetLengthUnit` enumeration. | | Gravity | The direction of the gravity vector of the 3D model. This 3D vector gives the downward direction in the coordinate system of your model. For example if negative `y` represents the downward direction in the model's 3D space, this value would be `Vector3(0.0f, -1.0f, 0.0f)`. |
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
The application is a stateful application that stores information in an HTTP Ses
[!INCLUDE [aro-quota](includes/aro-quota.md)]
-1. Prepare a local machine with a Unix-like operating system that is supported by the various products installed (for example Red Hat Enterprise Linux 8 (latest update) in the case of JBoss EAP).
+1. Prepare a local machine with a Unix-like operating system that is supported by the various products installed.
1. Install a Java SE implementation (for example, [Oracle JDK 11](https://www.oracle.com/java/technologies/downloads/#java11)). 1. Install [Maven](https://maven.apache.org/download.cgi) 3.6.3 or higher. 1. Install [Docker](https://docs.docker.com/get-docker/) for your OS.
However, when you are targeting OpenShift, you might want to trim the capabiliti
Navigate to your demo application local repository and change the branch to `bootable-jar`: ```bash
-jboss-on-aro-jakartaee (main) $ git checkout bootable-jar
+$ git checkout bootable-jar
Switched to branch 'bootable-jar'
-jboss-on-aro-jakartaee (bootable-jar) $
+$
``` Let's do a quick review about what we have changed:
Follow the next steps to build and run the application locally.
1. Build the Bootable JAR. When we are building the Bootable JAR, we need to specify the database driver version we want to use: ```bash
- jboss-on-aro-jakartaee (bootable-jar) $ MSSQLSERVER_DRIVER_VERSION=7.4.1.jre11 \
+ $ MSSQLSERVER_DRIVER_VERSION=7.4.1.jre11 \
mvn clean package ``` 1. Launch the Bootable JAR by using the following command. When we are launching the application, we need to pass the required environment variables to configure the data source: ```bash
- jboss-on-aro-jakartaee (bootable-jar) $ MSSQLSERVER_USER=SA \
+ $ MSSQLSERVER_USER=SA \
MSSQLSERVER_PASSWORD=Passw0rd! \ MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds \ MSSQLSERVER_DATABASE=todos_db \
Follow the next steps to build and run the application locally.
1. (Optional) If you want to verify the clustering capabilities, you can also launch more instances of the same application by passing to the Bootable JAR the `jboss.node.name` argument and, to avoid conflicts with the port numbers, shifting the port numbers by using `jboss.socket.binding.port-offset`. For example, to launch a second instance that will represent a new pod on OpenShift, you can execute the following command in a new terminal window: ```bash
- jboss-on-aro-jakartaee (bootable-jar) $ MSSQLSERVER_USER=SA \
+ $ MSSQLSERVER_USER=SA \
MSSQLSERVER_PASSWORD=Passw0rd! \ MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds \ MSSQLSERVER_DATABASE=todos_db \
Follow the next steps to build and run the application locally.
1. Check the application health endpoints (live and ready). These endpoints will be used by OpenShift to verify when your pod is live and ready to receive user requests: ```bash
- jboss-on-aro-jakartaee (bootable-jar) $ curl http://localhost:9990/health/live
+ $ curl http://localhost:9990/health/live
{"status":"UP","checks":[{"name":"SuccessfulCheck","status":"UP"}]}
- jboss-on-aro-jakartaee (bootable-jar) $ curl http://localhost:9990/health/ready
+ $ curl http://localhost:9990/health/ready
{"status":"UP","checks":[{"name":"deployments-status","status":"UP","data":{"todo-list.war":"OK"}},{"name":"server-state","status":"UP","data":{"value":"running"}},{"name":"boot-errors","status":"UP"},{"name":"DBConnectionHealthCheck","status":"UP"}]} ```
To deploy the application, we are going to use the JBoss EAP Helm Charts already
Navigate to your demo application local repository and change the current branch to `bootable-jar-openshift`: ```bash
-jboss-on-aro-jakartaee (bootable-jar) $ git checkout bootable-jar-openshift
+$ git checkout bootable-jar-openshift
Switched to branch 'bootable-jar-openshift'
-jboss-on-aro-jakartaee (bootable-jar-openshift) $
+$
``` Let's do a quick review about what we have changed:
This file expects the presence of an OpenShift Secret object named `mssqlserver-
1. To create the Secret object with the information relative to the database, execute the following command on the `eap-demo` project created before at the pre-requisite steps section: ```bash
- jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc create secret generic mssqlserver-secret \
+ $ oc create secret generic mssqlserver-secret \
--from-literal db-password=Passw0rd! \ --from-literal db-user=sa \ --from-literal db-name=todos_db
This file expects the presence of an OpenShift Secret object named `mssqlserver-
1. Deploy the database server by executing the following: ```bash
- jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc apply -f ./deployment/msqlserver/mssqlserver.yaml
+ $ oc apply -f ./deployment/msqlserver/mssqlserver.yaml
service/mssqlserver created deploymentconfig.apps.openshift.io/mssqlserver created persistentvolumeclaim/mssqlserver-pvc created
This file expects the presence of an OpenShift Secret object named `mssqlserver-
1. Monitor the status of the pods and wait until the database server is running: ```bash
- jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc get pods -w
+ $ oc get pods -w
NAME READY STATUS RESTARTS AGE mssqlserver-1-deploy 0/1 Completed 0 34s mssqlserver-1-gw7qw 1/1 Running 0 31s
This file expects the presence of an OpenShift Secret object named `mssqlserver-
1. Connect to the database pod and create the database `todos_db`: ```bash
- jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc rsh mssqlserver-1-gw7qw
+ $ oc rsh mssqlserver-1-gw7qw
sh-4.4$ /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw0rd!' 1> CREATE DATABASE todos_db 2> GO
Before deploying the application, let's create the expected Secret object that w
1. Execute the following to create the OpenShift secret object that will hold the application configuration: ```bash
- jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc create secret generic todo-list-secret \
+ $ oc create secret generic todo-list-secret \
--from-literal app-driver-version=7.4.1.jre11 \ --from-literal app-ds-jndi=java:/comp/env/jdbc/mssqlds \ --from-literal app-cluster-password=mut2UTG6gDwNDcVW
Select **Uninstall Helm Release** to remove the application. Notice that the sec
Execute the following command if you want to delete the secret that holds the application configuration: ```bash
-jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc delete secrets/todo-list-secret
+$ oc delete secrets/todo-list-secret
secret "todo-list-secret" deleted ```
secret "todo-list-secret" deleted
If you want to delete the database and the related objects, execute the following command: ```bash
-jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc delete all -l app=mssql2019
+$ oc delete all -l app=mssql2019
replicationcontroller "mssqlserver-1" deleted service "mssqlserver" deleted deploymentconfig.apps.openshift.io "mssqlserver" deleted
-jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc delete secrets/mssqlserver-secret
+$ oc delete secrets/mssqlserver-secret
secret "mssqlserver-secret" deleted ```
secret "mssqlserver-secret" deleted
You can also delete all the configuration created for this demo by deleting the `eap-demo` project. To do so, execute the following: ```bash
-jboss-on-aro-jakartaee (bootable-jar-openshift) $ oc delete project eap-demo
+$ oc delete project eap-demo
project.project.openshift.io "eap-demo" deleted ```
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/overview.md
The service runs the community version of PostgreSQL. This allows full applicati
- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like pg_dump and pg_restore can provide the fastest way to migrate. See [Migrate using dump and restore](../howto-migrate-using-dump-and-restore.md) for details. - **Azure Database Migration Service** – For seamless and simplified migrations to flexible server with minimal downtime, Azure Database Migration Service can be leveraged. See [DMS via portal](../../dms/tutorial-postgresql-azure-postgresql-online-portal.md) and [DMS via CLI](../../dms/tutorial-postgresql-azure-postgresql-online.md). You can migrate from your Azure Database for PostgreSQL - Single Server to Flexible Server. See this [DMS article](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) for details.
+## Frequently asked questions
+
+ Will Flexible Server replace Single Server, or will Single Server be retired soon?
+
+We continue to support Single Server and encourage you to adopt Flexible Server, which offers richer capabilities such as zone-resilient HA, predictable performance, maximum control, a custom maintenance window, cost optimization controls, and a simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API, or SKU, you'll receive advance notice, including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
+ ## Contacts
-For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address is not a technical support alias.
+For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address isn't a technical support alias.
In addition, consider the following points of contact as appropriate:
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/release-notes.md
In addition, consider the following points of contact as appropriate:
- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). - To fix an issue with your account, file a [support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal. - To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/forums/597976-azure-database-for-postgresql).+
+## Frequently asked questions
+
+ Will Flexible Server replace Single Server, or will Single Server be retired soon?
+
+We continue to support Single Server and encourage you to adopt Flexible Server, which offers richer capabilities such as zone-resilient HA, predictable performance, maximum control, a custom maintenance window, cost optimization controls, and a simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API, or SKU, you will receive advance notice, including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
## Next steps
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/overview-single-server.md
The service runs community version of PostgreSQL. This allows full application c
- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like pg_dump and pg_restore can provide the fastest way to migrate. See [Migrate using dump and restore](./howto-migrate-using-dump-and-restore.md) for details. - **Azure Database Migration Service** – For seamless and simplified migrations to single server with minimal downtime, Azure Database Migration Service can be leveraged. See [DMS via portal](../dms/tutorial-postgresql-azure-postgresql-online-portal.md) and [DMS via CLI](../dms/tutorial-postgresql-azure-postgresql-online.md).
+## Frequently asked questions
+
+ Will Flexible Server replace Single Server, or will Single Server be retired soon?
+
+We continue to support Single Server and encourage you to adopt Flexible Server, which offers richer capabilities such as zone-resilient HA, predictable performance, maximum control, a custom maintenance window, cost optimization controls, and a simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API, or SKU, you will receive advance notice, including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
++ ## Contacts For any questions or suggestions you might have about working with Azure Database for PostgreSQL, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). This email address is not a technical support alias.
private-link Private Link Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-link-service-overview.md
Custom TLV details:
> [!NOTE] > Service provider is responsible for making sure that the service behind the standard load balancer is configured to parse the proxy protocol header as per the [specification](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) when proxy protocol is enabled on private link service. The request will fail if proxy protocol setting is enabled on private link service but service provider's service is not configured to parse the header. Similarly, the request will fail if the service provider's service is expecting a proxy protocol header while the setting is not enabled on the private link service. Once proxy protocol setting is enabled, proxy protocol header will also be included in HTTP/TCP health probes from host to the backend virtual machines, even though there will be no client information in the header.
+The matching `LINKID` that is part of the PROXYv2 (TLV) protocol can be found on the `PrivateEndpointConnection` resource as the `linkIdentifier` property. For more details, see the
+[Private Link Services API](/../../../rest/api/virtualnetwork/private-link-services/get-private-endpoint-connection#privateendpointconnection).
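
As an illustration only, the following sketch reads that property with `az rest` against the API linked above. The subscription, resource group, service, and connection names, and the API version, are placeholders (use a current Microsoft.Network API version), not values from this article:

```azurecli
# Hypothetical names and API version; replace them with your own values.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/privateLinkServices/<service-name>/privateEndpointConnections/<pe-connection-name>?api-version=<api-version>" \
  --query "properties.linkIdentifier"
```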
+ ## Limitations The following are the known limitations when using the Private Link service:
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-lineage-user-guide.md
Databases & storage solutions such as Oracle, Teradata, and SAP have query engin
|**Category**| **Data source** | ||| |Database| [Cassandra](register-scan-cassandra-source.md)|
-|| [DB2](register-scan-db2.md) |
+|| [Db2](register-scan-db2.md) |
|| [Google BigQuery](register-scan-google-bigquery-source.md)| || [Hive Metastore Database](register-scan-hive-metastore-source.md) | || [MySQL](register-scan-mysql.md) |
purview Concept Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-network.md
Previously updated : 01/26/2022 Last updated : 01/27/2022 # Azure Purview network architecture and best practices
For performance and cost optimization, we highly recommended deploying one or mo
### DNS configuration with private endpoints
-#### Name resolution for single Azure Purview account
-
-If you have one Azure Purview account in your tenant, and you have enabled private endpoints for account, portal and ingestion, you can use any of [the supported scenarios](catalog-private-link-name-resolution.md#deployment-options) for name resolution in your network.
- #### Name resolution for multiple Azure Purview accounts It is recommended to follow these recommendations, if your organization needs to deploy and maintain multiple Azure Purview accounts using private endpoints: 1. Deploy at least one _account_ private endpoint for each Azure Purview account.
-2. Deploy at least one _ingestion_ private endpoint for each Azure Purview account.
-3. Deploy one _portal_ private endpoint for one of the Azure Purview accounts in your Azure environments. Create one DNS A record for _portal_ private endpoint to resolve `web.purview.azure.com`.
-
+2. Deploy at least one set of _ingestion_ private endpoints for each Azure Purview account.
+3. Deploy one _portal_ private endpoint for one of the Azure Purview accounts in your Azure environments. Create one DNS A record for _portal_ private endpoint to resolve `web.purview.azure.com`. The _portal_ private endpoint can be used by all purview accounts in the same Azure virtual network or virtual networks connected through VNet peering.
+ :::image type="content" source="media/concept-best-practices/network-pe-dns.png" alt-text="Screenshot that shows how to handle private endpoints and DNS records for multiple Azure Purview accounts."lightbox="media/concept-best-practices/network-pe-dns.png":::
+This scenario also applies if multiple Azure Purview accounts are deployed across multiple subscriptions and multiple VNets that are connected through VNet peering. The _portal_ private endpoint mainly renders static assets related to Azure Purview Studio, so it's independent of any specific Azure Purview account. As a result, only one _portal_ private endpoint is needed to reach all Azure Purview accounts in the Azure environment, as long as the VNets are connected.
++ > [!NOTE]
-> _Portal_ private endpoint mainly renders static assets related to Azure Purview Studio, thus, it is independent of Azure Purview account, therefore, only one _portal_ private endpoint is needed to visit all Azure Purview accounts in the Azure environment.
-You may need to deploy separate _portal_ private endpoints for each Azure Purview account in the scenarios where Azure Purview accounts are deployed in isolated network segmentations.
-> Azure Purview _portal_ is static contents for all customers without any customer information. Optionally, you can use public network to launch `web.purview.azure.com` if your end users are allowed to launch the Internet.
+> You may need to deploy separate _portal_ private endpoints for each Azure Purview account in scenarios where Azure Purview accounts are deployed in isolated network segments.
+> The Azure Purview _portal_ serves static content for all customers without any customer information. Optionally, you can use the public network (without a portal private endpoint) to open `web.purview.azure.com` if your end users are allowed to access the Internet.
## Option 3: Use both private and public endpoints
purview Deployment Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/deployment-best-practices.md
Another important aspect to include in your production process is how classifica
In Azure Purview, there are several areas where the Catalog Administrators need to ensure consistency and maintenance best practices over its life cycle: * **Data assets** ΓÇô Data sources will need to be rescanned across environments. ItΓÇÖs not recommended to scan only in development and then regenerate them using APIs in Production. The main reason is that the Azure Purview scanners do a lot more ΓÇ£wiringΓÇ¥ behind the scenes on the data assets, which could be complex to move them to a different Azure Purview instance. ItΓÇÖs much easier to just add the same data source in production and scan the sources again. The general best practice is to have documentation of all scans, connections, and authentication mechanisms being used.
-* **Scan rule sets** ΓÇô This is your collection of rules assigned to specific scan such as file type and classifications to detect. If you donΓÇÖt have that many scan rule sets, itΓÇÖs possible to just re-create them manually again via Production. This will require an internal process and good documentation. However, if you rule sets change on the daily or weekly basis, this could be addressed by exploring the REST API route.
-* **Scan rule sets** – This is your collection of rules assigned to specific scan such as file type and classifications to detect. If you don't have that many scan rule sets, it's possible to just re-create them manually again via Production. This will require an internal process and good documentation. However, if you rule sets change on the daily or weekly basis, this could be addressed by exploring the REST API route.
* **Custom classifications** ΓÇô Your classifications may not also change on a regular basis. During the initial phase of deployment, it may take some time to understand various requirements to come up with custom classifications. However, once settled, this will require little change. So the recommendation here is to manually migrate any custom classifications over or use the REST API. * **Glossary** ΓÇô ItΓÇÖs possible to export and import glossary terms via the UX. For automation scenarios, you can also use the REST API. * **Resource set pattern policies** ΓÇô This functionality is very advanced for any typical organizations to apply. In some cases, your Azure Data Lake Storage has folder naming conventions and specific structure that may cause problems for Azure Purview to generate the resource set. Your business unit may also want to change the resource set construction with additional customizations to fit the business needs. For this scenario, itΓÇÖs best to keep track of all changes via REST API, and document the changes through external versioning platform.
Additional hardening steps can be taken:
* Use REST APIs to export critical metadata and properties for backup and recovery * Use workflow to automate ticketing and eventing to avoid human errors
+## Moving tenants
+
+If your Azure Subscription moves tenants while you have an Azure Purview account, there are some steps you should follow after the move.
+
+Currently, your Azure Purview account's system-assigned and user-assigned managed identities will be cleared during the move to the new tenant. This is because your Azure tenant houses all authentication information, so these identities need to be updated for your Azure Purview account in the new tenant.
+
+After the move, follow the below steps to clear the old identities, and create new ones:
+
+1. If you're running locally, sign in to Azure through the Azure CLI.
+
+ ```azurecli-interactive
+ az login
+ ```
+ Alternatively, you can use the [Azure Cloud Shell](../cloud-shell/overview.md) in the Azure Portal.
+ Direct browser link: [https://shell.azure.com](https://shell.azure.com).
+
+1. Obtain an access token by using [az account get-access-token](/cli/azure/account#az_account_get_access_token).
+ ```azurecli-interactive
+ az account get-access-token
+ ```
+
+1. Run the following bash command to disable all managed identities (user and system assigned managed identities):
+
+ > [!IMPORTANT]
+ > Be sure to replace these values in the below commands:
+ > - \<Subscription_Id>: Your Azure Subscription ID
+ > - \<Resource_Group_Name>: Name of the resource group where your Azure Purview account is housed.
+ > - \<Account_Name>: Your Azure Purview account name
+ > - \<Access_Token>: The token from the first two steps.
+
+ ```bash
+ curl 'https://management.azure.com/subscriptions/<Subscription_Id>/resourceGroups/<Resource_Group_Name>/providers/Microsoft.Purview/accounts/<Account_Name>?api-version=2021-07-01' -X PATCH -d'{"identity":{"type":"None"}}' -H "Content-Type: application/json" -H "Authorization:Bearer <Access_Token>"
+ ```
+
+1. To enable your new system-assigned managed identity (SAMI), run the following bash command:
+
+ ```bash
+ curl 'https://management.azure.com/subscriptions/<Subscription_Id>/resourceGroups/<Resource_Group_Name>/providers/Microsoft.Purview/accounts/<Account_Name>?api-version=2021-07-01' -X PATCH -d '{"identity":{"type":"SystemAssigned"}}' -H "Content-Type: application/json" -H "Authorization:Bearer <Access_Token>"
+ ```
+
+1. If you had a user assigned managed identity (UAMI), to enable one on your new tenant, register your UAMI in Azure Purview as you did originally by following [the steps from the manage credentials article](manage-credentials.md#create-a-user-assigned-managed-identity).
+ ## Next steps - [Collections best practices](concept-best-practices-collections.md)
purview Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/purview-connector-overview.md
Azure Purview supports the following data stores. Select each data store to lear
|| [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No| |Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No | || [Cassandra](register-scan-cassandra-source.md)|[Yes](register-scan-cassandra-source.md#register) | No | [Yes](register-scan-cassandra-source.md#lineage)| No|
-|| [DB2](register-scan-db2.md) | [Yes](register-scan-db2.md#register) | No | [Yes](register-scan-db2.md#lineage) | No |
+|| [Db2](register-scan-db2.md) | [Yes](register-scan-db2.md#register) | No | [Yes](register-scan-db2.md#lineage) | No |
|| [Google BigQuery](register-scan-google-bigquery-source.md)| [Yes](register-scan-google-bigquery-source.md#register)| No | [Yes](register-scan-google-bigquery-source.md#lineage)| No| || [Hive Metastore Database](register-scan-hive-metastore-source.md) | [Yes](register-scan-hive-metastore-source.md#register) | No | [Yes*](register-scan-hive-metastore-source.md#lineage) | No| || [MySQL](register-scan-mysql.md) | [Yes](register-scan-mysql.md#register) | No | [Yes](register-scan-mysql.md#scan) | No |
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-db2.md
Title: Connect to and manage DB2
-description: This guide describes how to connect to DB2 in Azure Purview, and use Azure Purview's features to scan and manage your DB2 source.
+ Title: Connect to and manage Db2
+description: This guide describes how to connect to Db2 in Azure Purview, and use Azure Purview's features to scan and manage your Db2 source.
Last updated 01/20/2022
-# Connect to and manage DB2 in Azure Purview (Preview)
+# Connect to and manage Db2 in Azure Purview (Preview)
-This article outlines how to register DB2, and how to authenticate and interact with DB2 in Azure Purview. For more information about Azure Purview, read the [introductory article](overview.md).
+This article outlines how to register Db2, and how to authenticate and interact with Db2 in Azure Purview. For more information about Azure Purview, read the [introductory article](overview.md).
> [!IMPORTANT]
-> DB2 as a source is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Db2 as a source is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Supported capabilities
This article outlines how to register DB2, and how to authenticate and interact
|||||||| | [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)|
-The supported IBM DB2 versions are DB2 for LUW 9.7 to 11.x. DB2 for z/OS (mainframe) and iSeries (AS/400) are not supported now.
+The supported IBM Db2 versions are Db2 for LUW 9.7 to 11.x. Db2 for z/OS (mainframe) and iSeries (AS/400) are not supported now.
-When scanning IBM DB2 source, Azure Purview supports:
+When scanning IBM Db2 source, Azure Purview supports:
- Extracting technical metadata including:
When scanning IBM DB2 source, Azure Purview supports:
- Fetching static lineage on assets relationships among tables and views.
-When setting up scan, you can choose to scan an entire DB2 database, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
+When setting up scan, you can choose to scan an entire Db2 database, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
## Prerequisites
When setting up scan, you can choose to scan an entire DB2 database, or scope th
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* Manually download a DB2 JDBC driver from [here](https://www.ibm.com/support/pages/db2-jdbc-driver-versions-and-downloads) onto your virtual machine where self-hosted integration runtime is running.
+* Manually download a Db2 JDBC driver from [here](https://www.ibm.com/support/pages/db2-jdbc-driver-versions-and-downloads) onto your virtual machine where self-hosted integration runtime is running.
> [!Note] > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
-* The DB2 user must have the CONNECT permission. Azure Purview connects to the syscat tables in IBM DB2 environment when importing metadata.
+* The Db2 user must have the CONNECT permission. Azure Purview connects to the syscat tables in IBM Db2 environment when importing metadata.
## Register
-This section describes how to register DB2 in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Db2 in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Steps to register
-To register a new DB2 source in your data catalog, do the following:
+To register a new Db2 source in your data catalog, do the following:
1. Navigate to your Azure Purview account in the [Azure Purview Studio](https://web.purview.azure.com/resource/). 1. Select **Data Map** on the left navigation. 1. Select **Register**
-1. On Register sources, select **DB2**. Select **Continue**.
+1. On Register sources, select **Db2**. Select **Continue**.
-On the **Register sources (DB2)** screen, do the following:
+On the **Register sources (Db2)** screen, do the following:
1. Enter a **Name** under which the data source will be listed in the Catalog.
-1. Enter the **Server** name to connect to a DB2 source. This can either be:
+1. Enter the **Server** name to connect to a Db2 source. This can either be:
* A host name used to connect to the database server. For example: `MyDatabaseServer.com` * An IP address. For example: `192.169.1.2` * Its fully qualified JDBC connection string. For example:
On the **Register sources (DB2)** screen, do the following:
jdbc:db2://COMPUTER_NAME_OR_IP:PORT/DATABASE_NAME ```
-1. Enter the **Port** used to connect to the database server (446 by default for DB2).
+1. Enter the **Port** used to connect to the database server (446 by default for Db2).
1. Select a collection or create a new one (Optional)
On the **Register sources (DB2)** screen, do the following:
## Scan
-Follow the steps below to scan DB2 to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
+Follow the steps below to scan Db2 to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
### Authentication for a scan
-The supported authentication type for a DB2 source is **Basic authentication**.
+The supported authentication type for a Db2 source is **Basic authentication**.
### Create and run scan
To create and run a new scan, do the following:
1. Navigate to **Sources**.
-1. Select the registered DB2 source.
+1. Select the registered Db2 source.
1. Select **+ New scan**.
To create and run a new scan, do the following:
> [!Note] > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
- 1. **Maximum memory available**: Maximum memory (in GB) available on customer's VM to be used by scanning processes. This is dependent on the size of DB2 source to be scanned.
+ 1. **Maximum memory available**: Maximum memory (in GB) available on customer's VM to be used by scanning processes. This is dependent on the size of Db2 source to be scanned.
> [!Note] > As a rule of thumb, provide 1 GB of memory for every 1,000 tables
- :::image type="content" source="media/register-scan-db2/scan.png" alt-text="scan DB2" border="true":::
+ :::image type="content" source="media/register-scan-db2/scan.png" alt-text="scan Db2" border="true":::
1. Select **Continue**.
To create and run a new scan, do the following:
## Lineage
-After scanning your DB2 source, you can [browse data catalog](how-to-browse-catalog.md) or [search data catalog](how-to-search-catalog.md) to view the asset details.
+After scanning your Db2 source, you can [browse data catalog](how-to-browse-catalog.md) or [search data catalog](how-to-search-catalog.md) to view the asset details.
-Go to the asset -> lineage tab, you can see the asset relationship when applicable. Refer to the [supported capabilities](#supported-capabilities) section on the supported DB2 lineage scenarios. For more information about lineage in general, see [data lineage](concept-data-lineage.md) and [lineage user guide](catalog-lineage-user-guide.md).
+Go to the asset's lineage tab to see the asset relationships when applicable. Refer to the [supported capabilities](#supported-capabilities) section on the supported Db2 lineage scenarios. For more information about lineage in general, see [data lineage](concept-data-lineage.md) and [lineage user guide](catalog-lineage-user-guide.md).
## Next steps
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/route-server-faq.md
Previously updated : 11/02/2021 Last updated : 01/26/2022
No. Azure Route Server only exchanges BGP routes with your NVA and then propagat
### Why does Azure Route Server require a public IP address?
-Azure Router Server needs to ensure connectivity to the backend service that manages the Route Server configuration, as such a public IP address is required.
+Azure Route Server needs to ensure connectivity to the backend service that manages the Route Server configuration; as such, a public IP address is required. This public IP address doesn't constitute a security exposure of your virtual network.
### Does Azure Route Server support IPv6?
security Azure Domains https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/azure-domains.md
This page is a partial list of the Azure domains in use. Some of them are REST A
|[Azure BizTalk Services](https://azure.microsoft.com/pricing/details/biztalk-services/) (retired)|*.biztalk.windows.net| |[Azure Blob storage](../../storage/blobs/index.yml)|*.blob.core.windows.net| |[Azure Cloud Services](../../cloud-services/cloud-services-choose-me.md) and [Azure Virtual Machines](../../virtual-machines/index.yml)|*.cloudapp.net|
-|[Azure Cloud Services](../../cloud-services/cloud-services-choose-me.md)|*.cloudapp.azure.com|
+|[Azure Cloud Services](../../cloud-services/cloud-services-choose-me.md) and [Azure Virtual Machines](../../virtual-machines/index.yml)|*.cloudapp.azure.com|
|[Azure Container Registry](https://azure.microsoft.com/services/container-registry/)|*.azurecr.io| |Azure Container Service (ACS) (deprecated)|*.azurecontainer.io| |[Azure Content Delivery Network (CDN)](https://azure.microsoft.com/services/cdn/)|*.vo.msecnd.net|
This page is a partial list of the Azure domains in use. Some of them are REST A
|[Azure Table Storage](../../storage/tables/table-storage-overview.md)|*.table.core.windows.net| |[Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md)|*.trafficmanager.net| |Azure Websites|*.azurewebsites.net|
-|[Visual Studio Codespaces](https://visualstudio.microsoft.com/services/visual-studio-codespaces/)|*.visualstudio.com|
+|[Visual Studio Codespaces](https://visualstudio.microsoft.com/services/visual-studio-codespaces/)|*.visualstudio.com|
sentinel Connect Azure Windows Microsoft Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-windows-microsoft-services.md
See below how to create data collection rules.
- You must have read and write permissions on the Microsoft Sentinel workspace. -- To collect events from any system that is not an Azure virtual machine, the system must have [**Azure Arc**](../azure-monitor/agents/azure-monitor-agent-install.md) installed and enabled *before* you enable the Azure Monitor Agent-based connector.
+- To collect events from any system that is not an Azure virtual machine, the system must have [**Azure Arc**](../azure-monitor/agents/azure-monitor-agent-manage.md) installed and enabled *before* you enable the Azure Monitor Agent-based connector.
This includes:
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sentinel-solutions-catalog.md
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Apache Log4j Vulnerability Detection** | Analytics rules, hunting queries | Application, Security - Threat Protection, Security - Vulnerability Management | Microsoft| |**Microsoft Insider Risk Management** (IRM) |[Data connector](data-connectors-reference.md#microsoft-365-insider-risk-management-irm-preview), workbook, analytics rules, hunting queries |Security - Insider threat | Microsoft|
-| **Microsoft MITRE ATT&CK solution for Cloud**| Workbooks, analytics rules, hunting queries|Security - Threat protection, Security - Others |Microsoft |
| **Microsoft Sentinel Deception** | [Workbooks, analytics rules, watchlists](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft | |**Zero Trust** (TIC3.0) |[Workbooks](https://techcommunity.microsoft.com/t5/public-sector-blog/announcing-the-azure-sentinel-zero-trust-tic3-0-workbook/ba-p/2313761) |Identity, Security - Others |Microsoft | | | | | |
spring-cloud How To Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-move-across-regions.md
+
+ Title: How to move an Azure Spring Cloud service instance to another region
+description: Describes how to move an Azure Spring Cloud service instance to another region
++++ Last updated : 01/27/2022+++
+# Move an Azure Spring Cloud service instance to another region
+
+This article shows you how to move your Azure Spring Cloud service instance to another region. Moving your instance is useful, for example, as part of a disaster recovery plan or to create a duplicate testing environment.
+
+You can't move an Azure Spring Cloud instance from one region to another directly, but you can use an Azure Resource Manager template (ARM template) to deploy to a new region. For more information about using Azure Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
+
+Before you move your service instance, you should be aware of the following limitations:
+
+- Different feature sets are supported by different pricing tiers (SKUs). If you change the SKU, you may need to change the template to include only features supported by the target SKU.
+- You might not be able to move all sub-resources in Azure Spring Cloud using the template. Your move may require extra setup after the template is deployed. For more information, see the [Configure the new Azure Spring Cloud service instance](#configure-the-new-azure-spring-cloud-service-instance) section.
+- When you move a virtual network (VNet) instance (see [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md)), you'll need to create new network resources.
+
+## Prerequisites
+
+- A running Azure Spring Cloud instance.
+- A target region that supports Azure Spring Cloud and its related features.
+- [Azure CLI](/cli/azure/install-azure-cli) if you aren't using the Azure portal.
+
+## Export the template
+
+### [Portal](#tab/azure-portal)
+
+First, use the following steps to export the template:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **All resources** in the left menu, then select your Azure Spring Cloud instance.
+1. Under **Automation**, select **Export template**.
+1. Select **Download** on the **Export template** pane.
+1. Locate the *.zip* file, unzip it, and get the *template.json* file. This file contains the resource template.
+
+### [Azure CLI](#tab/azure-cli)
+
+First, use the following command to export the template:
+
+```azurecli
+az login
+az account set --subscription <resource-subscription-id>
+az group export --resource-group <resource-group> --resource-ids <resource-id>
+```
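+
+If you don't already have the resource ID of the service instance handy, you can look it up first. The following is a minimal sketch, assuming the `spring-cloud` Azure CLI extension is installed; the names are placeholders:
+
+```azurecli
+# Look up the resource ID of the existing service instance (placeholder names)
+az spring-cloud show \
+    --name <old-service-name> \
+    --resource-group <resource-group> \
+    --query id \
+    --output tsv
+```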
+++
+## Modify the template
+
+Next, use the following steps to modify the *template.json* file. In the examples shown here, the new Azure Spring Cloud instance name is *new-service-name*, and the previous instance name is *old-service-name*.
+
+1. Change all `name` instances in the template from *old-service-name* to *new-service-name*, as shown in the following example:
+
+ ```json
+ {
+ "type": "Microsoft.AppPlatform/Spring",
+ "apiVersion": "{api-version}",
+ "_comment": "the following line was changed from 'old-service-name'",
+ "name": "[parameters('new-service-name')]",
+ ….
+ }
+ ```
+
+1. Change the `location` instances in the template to the new target location, as shown in the following example:
+
+ ```json
+ {
+ "type": "Microsoft.AppPlatform/Spring",
+ "apiVersion": "{api-version}",
+ "name": "[parameters('new_service_name')]",
+ "_comment": "the following line was changed from 'old-region'",
+ "location": "{new-region}",
+ …..
+ }
+ ```
+
+1. If the instance you're moving is a VNet instance, you'll need to update the target VNet resource `parameters` instances in the template, as shown in the following example:
+
+ ```json
+ "parameters": {
+ …
+ "virtualNetworks_service_vnet_externalid": {
+ "_comment": "the following line was changed from 'old-vnet-resource-id'",
+ "defaultValue": "{new-vnet-resource-id}",
+ "type": "String"
+ }
+ },
+ ```
+
+ Be sure the subnets `serviceRuntimeSubnetId` and `appSubnetId` (defined in the service `networkProfile`) exist.
+
+ ```json
+ {
+ "type": "Microsoft.AppPlatform/Spring",
+ "apiVersion": "{api-version}",
+ "name": "[parameters('Spring_new_service_name')]",
+ …
+ "properties": {
+ "networkProfile": {
+        "serviceRuntimeSubnetId": "[concat(parameters('virtualNetworks_service_vnet_externalid'), '/subnets/service-runtime-subnet')]",
+        "appSubnetId": "[concat(parameters('virtualNetworks_service_vnet_externalid'), '/subnets/apps-subnet')]"
+ }
+ }
+ }
+ ```
+
+1. If any custom domain resources are configured, you need to create the CNAME records as described in [Tutorial: Map an existing custom domain to Azure Spring Cloud](tutorial-custom-domain.md). Be sure the record name matches what's expected for the new service name. A CLI sketch for this step follows this list.
+
+1. Change all `relativePath` instances in the template `properties` for all app resources to `<default>`, as shown in the following example:
+
+ ```json
+ {
+ "type": "Microsoft.AppPlatform/Spring/apps/deployments",
+ "apiVersion": "{api-version}",
+ "name": "[concat(parameters('Spring_new_service_name'), '/api-gateway/default')]",
+ …
+ "properties": {
+ "active": true,
+ "source": {
+ "type": "Jar",
+ "_comment": "the following line was changed to 'default'",
+ "relativePath": "<default>"
+ },
+ …
+ }
+ }
+ ```
+
+    After the app is created, it uses a default banner application. You'll need to deploy the JAR files again using the Azure CLI. For more information, see the [Configure the new Azure Spring Cloud service instance](#configure-the-new-azure-spring-cloud-service-instance) section below.
+
+1. If service binding was used and you want to import it to the new service instance, add the `key` property for the target bound resource. In the following example, a bound MySQL database would be included:
+
+ ```json
+ {
+ "type": "Microsoft.AppPlatform/Spring/apps/bindings",
+ "apiVersion": "{api-version}",
+ "name": "[concat(parameters('Spring_new_service_name'), '/api-gateway/mysql')]",
+ …
+ "_comment": "the following line imports a mysql binding",
+ "properties": {
+ "resourceId": "[parameters('servers_test_mysql_name_externalid')]",
+ "key": "{mysql-password}",
+ "bindingParameters": {
+ "databaseName": "mysql",
+ "username": "{mysql-user-name}"
+ }
+ }
+ }
+ ```
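+
+For the custom domain step in the list above, the following sketch shows one way to create the CNAME record with the Azure CLI, assuming your domain is hosted in an Azure DNS zone; the zone name, record name, and service names are placeholders:
+
+```azurecli
+# Point the custom domain at the new service instance's default endpoint (placeholder values)
+az network dns record-set cname set-record \
+    --resource-group <dns-resource-group> \
+    --zone-name <your-domain.com> \
+    --record-set-name <record-name> \
+    --cname <new-service-name>-<app-name>.azuremicroservices.io
+```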
+
+## Deploy the template
+
+### [Portal](#tab/azure-portal)
+
+After you modify the template, use the following steps to deploy the template and create the new resource.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the top search box, search for *Deploy a custom template*.
+
+ :::image type="content" source="media/how-to-move-across-regions/search-deploy-template.png" alt-text="Azure portal screenshot showing search results." lightbox="media/how-to-move-across-regions/search-deploy-template.png" border="true":::
+
+1. Under **Services**, select **Deploy a custom template**.
+1. Go to the **Select a template** tab, then select **Build your own template in the editor**.
+1. In the template editor, paste in the *template.json* file you modified earlier, then select **Save**.
+1. In the **Basics** tab, fill in the following information:
+
+ - The target subscription.
+ - The target resource group.
+ - The target region.
+ - Any other parameters required for the template.
+
+ :::image type="content" source="media/how-to-move-across-regions/deploy-template.png" alt-text="Azure portal screenshot showing 'Custom deployment' pane.":::
+
+1. Select **Review + create** to create the target service instance.
+1. Wait until the template has deployed successfully. If the deployment fails, select **Deployment details** to view the failure reason, then update the template or configurations accordingly.
+
+### [Azure CLI](#tab/azure-cli)
+
+After you modify the template, use the following command to deploy the custom template and create the new resource.
+
+```azurecli
+az login
+az account set --subscription <resource-subscription-id>
+az deployment group create \
+ --name <custom-deployment-name> \
+ --resource-group <resource-group> \
+ --template-file <path-to-template> \
+ --parameters <param-name-1>=<param-value-1>
+```
+
+Wait until the template has deployed successfully. If the deployment fails, view the deployment details with the command `az deployment group list`, then update the template or configurations accordingly.
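+
+For example, the following commands list recent deployments and then show the error details for a specific one; the deployment name is a placeholder:
+
+```azurecli
+# List deployments in the resource group, then inspect the error details of a failed deployment
+az deployment group list \
+    --resource-group <resource-group> \
+    --output table
+
+az deployment group show \
+    --name <custom-deployment-name> \
+    --resource-group <resource-group> \
+    --query properties.error
+```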
+++
+## Configure the new Azure Spring Cloud service instance
+
+Some features aren't exported to the template, or can't be imported with a template. You must manually set up some Azure Spring Cloud items on the new instance after the template deployment completes successfully. The following guidelines describe these requirements:
+
+- The JAR files for the previous service aren't deployed directly to the new service instance. To deploy all apps, follow the instructions in [Quickstart: Build and deploy apps to Azure Spring Cloud](quickstart-deploy-apps.md); a CLI sketch follows this list. If there's no active deployment configured automatically, you must configure a production deployment. For more information, see [Set up a staging environment in Azure Spring Cloud](how-to-staging-environment.md).
+- Config Server won't be imported automatically. To set up Config Server on your new instance, see [Set up a Spring Cloud Config Server instance for your service](how-to-config-server.md).
+- Managed identity will be created automatically for the new service instance, but the object ID will be different from the previous instance. For managed identity to work in the new service instance, follow the instructions in [How to enable system-assigned managed identity for applications in Azure Spring Cloud](how-to-enable-system-assigned-managed-identity.md).
+- For Monitoring -> Metrics, see [Metrics for Azure Spring Cloud](concept-metrics.md). To avoid mixing the data, we recommend that you create a new Log Analytics instance to collect the new data. You should also create a new instance for other monitoring configurations.
+- For Monitoring -> Diagnostic settings and logs, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md).
+- For Monitoring -> Application Insights, see [Application Insights Java In-Process Agent in Azure Spring Cloud](how-to-application-insights.md).
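+
+As a minimal sketch, redeploying one application JAR with the Azure CLI might look like the following, assuming the `spring-cloud` CLI extension is installed; the app name and JAR path are placeholders:
+
+```azurecli
+# Redeploy an application's JAR file to the new service instance (placeholder names and path)
+az spring-cloud app deploy \
+    --resource-group <resource-group> \
+    --service <new-service-name> \
+    --name <app-name> \
+    --jar-path <path-to-jar-file>
+```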
+
+## Next steps
+
+- [Quickstart: Build and deploy apps to Azure Spring Cloud](quickstart-deploy-apps.md)
+- [Quickstart: Set up Azure Spring Cloud Config Server](quickstart-setup-config-server.md)
+- [Quickstart: Set up a Log Analytics workspace](quickstart-setup-log-analytics.md)
storage Blob Containers Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-containers-cli.md
+
+ Title: Manage blob containers using Azure CLI
+
+description: Learn how to manage Azure storage containers using Azure CLI
+++++ Last updated : 01/19/2022++++
+# Manage blob containers using Azure CLI
+
+Azure blob storage allows you to store large amounts of unstructured object data. You can use blob storage to gather or expose media, content, or application data to users. Because all blob data is stored within containers, you must create a storage container before you can begin to upload data. To learn more about blob storage, read the [Introduction to Azure Blob storage](storage-blobs-introduction.md).
+
+The Azure CLI is Azure's cross-platform command-line experience for managing Azure resources. You can use it in your browser with Azure Cloud Shell. You can also install it on macOS, Linux, or Windows and run it locally from the command line.
+
+In this how-to article, you learn to use the Azure CLI to work with container objects.
+
+## Prerequisites
+++
+- It's always a good idea to install the latest version of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+### Authorize access to Blob storage
+
+You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using the storage account access key. Using Azure AD credentials is recommended, and this article's examples use Azure AD exclusively.
+
+Azure CLI commands for data operations against Blob storage support the `--auth-mode` parameter, which enables you to specify how to authorize a given operation. Set the `--auth-mode` parameter to `login` to authorize with Azure AD credentials. For more information, see [Authorize access to blob or queue data with Azure CLI](./authorize-data-operations-cli.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+
+Run the `login` command to open a browser and connect to your Azure subscription.
+
+```azurecli-interactive
+az login
+```
+
+## Create a container
+
+To create a container with Azure CLI, call the [az storage container create](/cli/azure/storage/container#az_storage_container_create) command. The following example illustrates three options for the creation of blob containers with the `az storage container create` command. The first approach creates a single container, while the remaining two approaches use Bash scripting operations to automate container creation.
+
+To use this example, supply values for the variables and ensure that you've logged in. Remember to replace the placeholder values in brackets with your own values.
+
+```azurecli
+#!/bin/bash
+export AZURE_STORAGE_ACCOUNT="<storage-account>"
+containerName="demo-container-1"
+containerPrefix="demo-container-"
+
+# Approach 1: Create a container
+az storage container create \
+ --name $containerName \
+ --auth-mode login
+
+# Approach 2: Create containers with a loop
+for value in {2..4}
+do
+ az storage container create \
+ --name $containerPrefix$value \
+ --auth-mode login
+done
+
+# Approach 3: Create containers by splitting multiple values
+containerList="${containerPrefix}5 ${containerPrefix}6 ${containerPrefix}7"
+for container in $containerList
+do
+ az storage container create \
+ --name $container \
+ --auth-mode login
+done
+```
+
+## List containers
+
+Use the `az storage container list` command to retrieve a list of storage containers. To return a list of containers whose names begin with a given character string, pass the string as the `--prefix` parameter value.
+
+The `--num-results` parameter can be used to limit the number of containers returned by the request. Azure Storage limits the number of containers returned by a single listing operation to 5000. This limit ensures that manageable amounts of data are retrieved. If the number of containers returned exceeds either the `--num-results` value or the service limit, a continuation token is returned. This token allows you to use multiple requests to retrieve any number of containers.
+
+You can also use the `--query` parameter to execute a [JMESPath query](https://jmespath.org/) on the results of commands. JMESPath is a query language for JSON that allows you to select and modify data returned from CLI output. Queries are executed on the JSON output before any display formatting is applied. For more information, see [How to query Azure CLI command output using a JMESPath query](/cli/azure/query-azure-cli).
+
+The following example first lists the maximum number of containers (subject to the service limit). Next, it lists three containers whose names begin with the prefix *demo-container-* by supplying values for the `--num-results` and `--prefix` parameters. Finally, a single container is listed by supplying the prefix and filtering the results with a JMESPath query on the container name.
+
+Read more about the [az storage container list](/cli/azure/storage/container#az_storage_container_list) command.
+
+```azurecli-interactive
+#!/bin/bash
+export AZURE_STORAGE_ACCOUNT="<storage-account>"
+numResults="3"
+containerPrefix="demo-container-"
+containerName="demo-container-1"
+
+# Approach 1: List maximum containers
+az storage container list \
+ --auth-mode login
+
+# Approach 2: List a defined number of named containers
+az storage container list \
+ --prefix $containerPrefix \
+ --num-results $numResults \
+ --auth-mode login
+
+# Approach 3: List an individual container
+az storage container list \
+ --prefix $containerPrefix \
+ --query "[?name=='$containerName']" \
+ --auth-mode login
+```
+
+## Read container properties and metadata
+
+A container exposes both system properties and user-defined metadata. System properties exist on each blob storage resource. Some properties are read-only, while others can be read or set. Under the covers, some system properties map to certain standard HTTP headers.
+
+User-defined metadata consists of one or more name-value pairs that you specify for a blob storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes only, and don't affect how the resource behaves.
+
+### Container properties
+
+To display the properties of a container with Azure CLI, call the [az storage container show](/cli/azure/storage/container#az_storage_container_show) command.
+
+In the following example, the first approach displays the properties of a single named container. Afterward, it retrieves all containers with the **demo-container-** prefix and iterates through them, listing their properties. Remember to replace the placeholder values with your own values.
+
+```azurecli-interactive
+#!/bin/bash
+export AZURE_STORAGE_ACCOUNT="<storage-account>"
+containerPrefix="demo-container-"
+containerName="demo-container-1"
+
+# Show a named container's properties
+az storage container show \
+ --name $containerName \
+ --auth-mode login
+
+# List several containers and show their properties
+containerList=$(az storage container list \
+ --query "[].name" \
+ --prefix $containerPrefix \
+ --auth-mode login \
+ --output tsv)
+for item in $containerList
+do
+ az storage container show \
+ --name $item \
+ --auth-mode login
+done
+```
+
+### Read and write container metadata
+
+Users that have many thousands of objects within their storage account can quickly locate specific containers based on their metadata. To read the metadata, you'll use the `az storage container metadata show` command. To update metadata, you'll need to call the `az storage container metadata update` command. The update command only accepts space-separated key-value pairs. For more information, see the [az storage container metadata](/cli/azure/storage/container/metadata) documentation.
+
+The example below first updates a container's metadata and afterward retrieves the container's metadata.
+
+```azurecli-interactive
+#!/bin/bash
+export AZURE_STORAGE_ACCOUNT="<storage-account>"
+containerName="demo-container-1"
+
+# Create metadata string
+metadata="key=value pie=delicious"
+
+# Update metadata
+az storage container metadata update \
+ --name $containerName \
+ --metadata $metadata \
+ --auth-mode login
+
+# Display metadata
+az storage container metadata show \
+ --name $containerName \
+ --auth-mode login
+```
+
+## Delete containers
+
+Depending on your use case, you can delete a single container or a group of containers with the `az storage container delete` command. When deleting a list of containers, you'll need to use conditional operations as shown in the examples below.
+
+> [!WARNING]
+> Running the following examples may permanently delete containers and blobs. Microsoft recommends enabling container soft delete to protect containers and blobs from accidental deletion. For more info, see [Soft delete for containers](soft-delete-container-overview.md).
+
+```azurecli-interactive
+#!/bin/bash
+export AZURE_STORAGE_ACCOUNT="<storage-account>"
+containerName="demo-container-1"
+containerPrefix="demo-container-"
+
+# Delete a single named container
+az storage container delete \
+ --name $containerName \
+ --auth-mode login
+
+# Delete containers by iterating a loop
+list=$(az storage container list \
+ --query "[].name" \
+ --auth-mode login \
+ --prefix $containerPrefix \
+ --output tsv)
+for item in $list
+do
+ az storage container delete \
+ --name $item \
+ --auth-mode login
+done
+```
+
+If you have container soft delete enabled for your storage account, then it's possible to retrieve containers that have been deleted. If your storage account's soft delete data protection option is enabled, the `--include-deleted` parameter will return containers deleted within the associated retention period. The `--include-deleted` parameter can only be used in conjunction with the `--prefix` parameter when returning a list of containers. To learn more about soft delete, refer to the [Soft delete for containers](soft-delete-container-overview.md) article.
+
+Use the following example to retrieve a list of containers deleted within the storage account's associated retention period.
+
+```azurecli-interactive
+# Retrieve a list of containers including those recently deleted
+az storage container list \
+    --prefix $containerPrefix \
+ --include-deleted \
+ --auth-mode login
+```
+
+## Restore a soft-deleted container
+
+As mentioned in the [List containers](#list-containers) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore containers deleted within the associated retention period. Before you can follow this example, you'll need to enable soft delete and configure it on at least one of your storage accounts.
+
+The following example explains how to restore a soft-deleted container with the `az storage container restore` command. You'll need to supply values for the `--name` and `--deleted-version` parameters to ensure that the correct version of the container is restored. If you don't know the deleted version, you can use the `az storage container list` command to retrieve it as shown in the following example.
+
+To learn more about the soft delete data protection option, refer to the [Soft delete for containers](soft-delete-container-overview.md) article.
+
+```azurecli-interactive
+#!/bin/bash
+export AZURE_STORAGE_ACCOUNT="<storage-account>"
+containerName="demo-container-1"
+
+# Restore an individual named container
+containerVersion=$(az storage container list \
+ --query "[?name=='$containerName'].[version]" \
+ --auth-mode login \
+ --output tsv \
+ --include-deleted)
+
+az storage container restore \
+ --name $containerName \
+ --deleted-version $containerVersion \
+ --auth-mode login
+```
+
+## Get a shared access signature for a container
+
+A shared access signature (SAS) provides delegated access to Azure resources. A SAS gives you granular control over how a client can access your data. For example, you can specify which resources are available to the client. You can also limit the types of operations that the client can perform, and specify the interval over which the SAS is valid.
+
+A SAS is commonly used to provide temporary and secure access to a client who wouldn't normally have permissions. To generate either a service or account SAS, you'll need to supply values for the `--account-name` and `--account-key` parameters. An example of this scenario would be a service that allows users to read and write their own data to your storage account.
+
+Azure Storage supports three types of shared access signatures: user delegation, service, and account SAS. For more information on shared access signatures, see the [Grant limited access to Azure Storage resources using shared access signatures](../common/storage-sas-overview.md) article.
+
+> [!CAUTION]
+> Any client that possesses a valid SAS can access data in your storage account as permitted by that SAS. It's important to protect a SAS from malicious or unintended use. Use discretion in distributing a SAS, and have a plan in place for revoking a compromised SAS.
+
+The following example illustrates the process of configuring a service SAS for a specific container using the `az storage container generate-sas` command. Because it is generating a service SAS, the example first retrieves the storage account key to pass as the `--account-key` value.
+
+The example will configure the SAS with start and expiry times and a protocol. It will also specify the **delete**, **read**, **write**, and **list** permissions in the SAS using the `--permissions` parameter. You can reference the full table of permissions in the [Create a service SAS](/rest/api/storageservices/create-service-sas) article.
+
+```azurecli-interactive
+#!/bin/bash
+storageAccount="<storage-account-name>"
+export AZURE_STORAGE_ACCOUNT=$storageAccount
+containerName="demo-container-1"
+permissions="drwl"
+expiry=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'`
+
+accountKey=$(az storage account keys list \
+ --account-name $storageAccount \
+ --query "[?permissions == 'FULL'].[value]" \
+ --output tsv)
+
+accountKey=$( echo $accountKey | cut -d' ' -f1 )
+
+az storage container generate-sas \
+ --name $containerName \
+ --https-only \
+    --permissions $permissions \
+ --expiry $expiry \
+ --account-key $accountKey
+```
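+
+Once the SAS is generated, a client can pass it to data plane commands instead of signing in. As a sketch, the following command lists the blobs in the container by using the token returned by the previous command; the token value is a placeholder:
+
+```azurecli-interactive
+# Use the SAS token from the previous command to list blobs in the container
+sasToken="<sas-token-from-previous-command>"
+
+az storage blob list \
+    --container-name $containerName \
+    --account-name $storageAccount \
+    --sas-token $sasToken \
+    --output table
+```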
+
+## Clean up resources
+
+If you want to delete the environment variables as part of this how-to article, run the following script.
+
+```azurecli
+# Remove environment variables
+unset AZURE_STORAGE_ACCOUNT
+```
+
+## Next steps
+
+In this how-to article, you learned how to manage containers in Azure blob storage. To learn more about working with blob storage by using Azure CLI, explore Azure CLI samples for Blob storage.
+
+> [!div class="nextstepaction"]
+> [Azure CLI samples for Blob storage](storage-samples-blobs-cli.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
storage Blob Containers Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-containers-powershell.md
# Manage blob containers using PowerShell
-Azure blob storage allows you to store large amounts of unstructured object data. You can use Blob Storage to gather or expose media, content, or application data to users. Because all blob data is stored within containers, you must create a storage container before you can begin to upload data. To learn more about Blob Storage, read the [Introduction to Azure Blob storage](storage-blobs-introduction.md).
+Azure blob storage allows you to store large amounts of unstructured object data. You can use blob storage to gather or expose media, content, or application data to users. Because all blob data is stored within containers, you must create a storage container before you can begin to upload data. To learn more about blob storage, read the [Introduction to Azure Blob storage](storage-blobs-introduction.md).
This how-to article explains how to work with both individual and multiple storage container objects.
loop-container4 11/2/2021 12:22:00 AM +00:00 True
## Read container properties and metadata
-A container exposes both system properties and user-defined metadata. System properties exist on each Blob Storage resource. Some properties are read-only, while others can be read or set. Under the covers, some system properties map to certain standard HTTP headers.
+A container exposes both system properties and user-defined metadata. System properties exist on each blob storage resource. Some properties are read-only, while others can be read or set. Under the covers, some system properties map to certain standard HTTP headers.
-User-defined metadata consists of one or more name-value pairs that you specify for a Blob Storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes only, and don't affect how the resource behaves.
+User-defined metadata consists of one or more name-value pairs that you specify for a blob storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes only, and don't affect how the resource behaves.
### Container properties
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-overview.md
If a blob has snapshots, the blob can't be deleted unless the snapshots are also
You can also delete one or more active snapshots without deleting the base blob. In this case, the snapshot is soft-deleted.
-If a directory is deleted in an account that has the hierarchical namespace feature enabled on it, the directory and all its contents are marked as soft-deleted.
+If a directory is deleted in an account that has the hierarchical namespace feature enabled on it, the directory and all its contents are marked as soft-deleted. Only the soft-deleted directory itself can be accessed; to access its contents, you must first undelete the directory.
Soft-deleted objects are invisible unless they're explicitly displayed or listed. For more information about how to list soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md).
For premium storage accounts, soft-deleted snapshots don't count toward the per-
You can restore soft-deleted blobs or directories (in a hierarchical namespace) by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation within the retention period. The **Undelete Blob** operation restores a blob and any soft-deleted snapshots associated with it. Any snapshots that were deleted during the retention period are restored.
-In accounts that have a hierarchical namespace, the **Undelete Blob** operation can also be used to restore a soft-deleted directory and all its contents. If you rename a directory that contains soft deleted blobs, those soft deleted blobs become disconnected from the directory. If you want to restore those blobs, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted blobs.
+In accounts that have a hierarchical namespace, the **Undelete Blob** operation can also be used to restore a soft-deleted directory and all its contents. If you rename a directory that contains soft deleted blobs, those soft deleted blobs become disconnected from the directory. If you want to restore those blobs, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted blobs. You cannot access the contents of a soft-deleted directory until after the directory has been undeleted.
Calling **Undelete Blob** on a blob that isn't soft-deleted will restore any soft-deleted snapshots that are associated with the blob. If the blob has no snapshots and isn't soft-deleted, then calling **Undelete Blob** has no effect.
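
As a minimal sketch, a single soft-deleted blob can be restored with the Azure CLI while it's still within the retention period; the account, container, and blob names below are placeholders:

```azurecli
# Restore a soft-deleted blob within the retention period (placeholder names)
az storage blob undelete \
    --account-name <storage-account> \
    --container-name <container-name> \
    --name <blob-name> \
    --auth-mode login
```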
storage Storage Manage Find Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-manage-find-blobs.md
This section describes known issues and conditions.
- Uploading page blobs with index tags doesn't persist the tags. Set the tags after uploading a page blob. -- If Blob storage versioning is enabled, you can still use index tags on the current version. Index tags are preserved for previous versions, but those tags aren't passed to the blob index engine, so you cannot them to retrieve previous versions. If you promote a previous version to the current version, then the tags of that previous version become the tags of the current version. Because those tags are associated with the current version, they are passed to the blob index engine and you can query them.
+- If Blob storage versioning is enabled, you can still use index tags on the current version. Index tags are preserved for previous versions, but those tags aren't passed to the blob index engine, so you cannot use them to retrieve previous versions. If you promote a previous version to the current version, then the tags of that previous version become the tags of the current version. Because those tags are associated with the current version, they are passed to the blob index engine and you can query them.
- There is no API to determine if index tags are indexed.
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/redundancy-migration.md
Previously updated : 01/19/2022 Last updated : 11/30/2021
For an overview of each of these options, see [Azure Storage redundancy](storage
## Switch between types of replication
-You can switch a storage account from one type of replication to any other type, but some scenarios are more straightforward than others. If you want to add or remove geo-replication or read access to the secondary region, you can use the Azure portal, PowerShell, or Azure CLI to update the replication setting. However, if you want to change how data is replicated in the primary region, by moving from LRS to ZRS or vice versa, then you must perform a manual migration.
+You can switch a storage account from one type of replication to any other type, but some scenarios are more straightforward than others. If you want to add or remove geo-replication or read access to the secondary region, you can use the Azure portal, PowerShell, or Azure CLI to update the replication setting. However, if you want to change how data is replicated in the primary region, by moving from LRS to ZRS or vice versa, then you must either perform a manual migration or request a live migration. And if you want to move from ZRS to GZRS or RA-GZRS, then you must perform a live migration, unless you are performing a failback operation after failover.
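
For the scenarios that only require a replication setting change, a minimal Azure CLI sketch looks like the following; the account name, resource group, and target SKU are placeholders:

```azurecli
# Change the replication setting of an existing storage account (placeholder values)
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --sku Standard_GRS
```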
The following table provides an overview of how to switch from each type of replication to another:
The following table provides an overview of how to switch from each type of repl
|--|-||-|| | <b>…from LRS</b> | N/A | Use Azure portal, PowerShell, or CLI to change the replication setting<sup>1,2</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Request a live migration<sup>5</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Switch to GRS/RA-GRS first and then request a live migration<sup>3</sup> | | <b>…from GRS/RA-GRS</b> | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A | Perform a manual migration <br /><br /> OR <br /><br /> Switch to LRS first and then request a live migration<sup>3</sup> | Perform a manual migration <br /><br /> OR <br /><br /> Request a live migration<sup>3</sup> |
-| <b>…from ZRS</b> | Perform a manual migration | Perform a manual migration | N/A | Request a live migration<sup>3</sup> <br /><br /> OR <br /><br /> Use Azure portal, PowerShell, or Azure CLI to change the replication setting as part of a failback operation only<sup>4</sup> |
+| <b>…from ZRS</b> | Perform a manual migration | Perform a manual migration | N/A | Request a live migration<sup>3</sup> <br /><br /> OR <br /><br /> Use PowerShell or Azure CLI to change the replication setting as part of a failback operation only<sup>4</sup> |
| <b>…from GZRS/RA-GZRS</b> | Perform a manual migration | Perform a manual migration | Use Azure portal, PowerShell, or CLI to change the replication setting | N/A | <sup>1</sup> Incurs a one-time egress charge.<br /> <sup>2</sup> Migrating from LRS to GRS is not supported if the storage account contains blobs in the archive tier.<br /> <sup>3</sup> Live migration is supported for standard general-purpose v2 and premium file share storage accounts. Live migration is not supported for premium block blob or page blob storage accounts.<br />
-<sup>4</sup> After an account failover to the secondary region, it's possible to initiate a fail back from the new primary back to the new secondary with Azure portal, PowerShell, or Azure CLI (version 2.30.0 or later). For more information, see [Use caution when failing back to the original primary](storage-disaster-recovery-guidance.md#use-caution-when-failing-back-to-the-original-primary). <br />
+<sup>4</sup> After an account failover to the secondary region, it's possible to initiate a fail back from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [Use caution when failing back to the original primary](storage-disaster-recovery-guidance.md#use-caution-when-failing-back-to-the-original-primary). <br />
<sup>5</sup> Migrating from LRS to ZRS is not supported if the storage account contains Azure Files NFSv4.1 shares. <br /> > [!CAUTION]
If you want to change how data in your storage account is replicated in the prim
When you perform a manual migration from LRS to ZRS in the primary region or vice versa, the destination storage account can be geo-redundant and can also be configured for read access to the secondary region. For example, you can migrate an LRS account to a GZRS or RA-GZRS account in one step.
+You cannot use a manual migration to migrate from ZRS to GZRS or RA-GZRS. You must request a live migration.
+ A manual migration can result in application downtime. If your application requires high availability, Microsoft also provides a live migration option. A live migration is an in-place migration with no downtime. With a manual migration, you copy the data from your existing storage account to a new storage account that uses ZRS in the primary region. To perform a manual migration, you can use one of the following options:
With a manual migration, you copy the data from your existing storage account to
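
One common way to perform that copy is AzCopy. The following sketch assumes you've generated SAS tokens for both the source and destination accounts; all values are placeholders:

```bash
# Copy a container from the existing account to the new account (placeholder URLs and SAS tokens)
azcopy copy \
    'https://<source-account>.blob.core.windows.net/<container>?<source-sas>' \
    'https://<destination-account>.blob.core.windows.net/<container>?<destination-sas>' \
    --recursive
```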
## Request a live migration to ZRS, GZRS, or RA-GZRS
-If you need to migrate your storage account from LRS to ZRS in the primary region with no application downtime, you can request a live migration from Microsoft. To migrate from LRS to GZRS or RA-GZRS, first switch to GRS or RA-GRS and then request a live migration. Similarly, you can request a live migration from GRS or RA-GRS to GZRS or RA-GZRS. To migrate from GRS or RA-GRS to ZRS, first switch to LRS, then request a live migration.
+If you need to migrate your storage account from LRS to ZRS in the primary region with no application downtime, you can request a live migration from Microsoft. To migrate from LRS to GZRS or RA-GZRS, first switch to GRS or RA-GRS and then request a live migration. Similarly, you can request a live migration from ZRS, GRS, or RA-GRS to GZRS or RA-GZRS. To migrate from GRS or RA-GRS to ZRS, first switch to LRS, then request a live migration.
During a live migration, you can access data in your storage account with no loss of durability or availability. The Azure Storage SLA is maintained during the migration process. There is no data loss associated with a live migration. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-network-security.md
You can use the same technique for an account that has the hierarchical namespac
| Service | Resource Provider Name | Purpose | | :-- | :- | :-- | | Azure API Management | Microsoft.ApiManagement/service | Enables Api Management service access to storage accounts behind firewall using policies. [Learn more](../../api-management/api-management-authentication-policies.md#use-managed-identity-in-send-request-policy). |
-| Azure Cache for Redis | Microsoft.Cache/Redis | Allows access to storage accounts through Azure Cache for Redis. |
+| Azure Cache for Redis | Microsoft.Cache/Redis | Allows access to storage accounts through Azure Cache for Redis. [Learn more](../../azure-cache-for-redis/cache-managed-identity.md)|
| Azure Cognitive Search | Microsoft.Search/searchServices | Enables Cognitive Search services to access storage accounts for indexing, processing and querying. | | Azure Cognitive Services | Microsoft.CognitiveService/accounts | Enables Cognitive Services to access storage accounts. [Learn more](../..//cognitive-services/cognitive-services-virtual-networks.md).| | Azure Container Registry Tasks | Microsoft.ContainerRegistry/registries | ACR Tasks can access storage accounts when building container images. |
stream-analytics Stream Analytics Javascript User Defined Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-javascript-user-defined-functions.md
Title: Azure Stream Analytics JavaScript user-defined functions description: This article is an introduction to JavaScript user-defined functions in Stream Analytics.--++
This method follows the same implementation behavior as the one available in Int
```javascript function main(datetime){ const options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' };
- return event.toLocaleDateString('de-DE', options);
+ return datetime.toLocaleDateString('de-DE', options);
} ```
synapse-analytics Workspaces Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/workspaces-encryption.md
Previously updated : 07/20/2021 Last updated : 01/27/2022
This article will describe:
## Encryption of data at rest
-A complete Encryption-at-Rest solution ensures the data is never persisted in un-encrypted form. Double encryption of data at rest mitigates threats with two, separate layers of encryption to protect against compromises of any single layer. Azure Synapse Analytics offers a second layer of encryption for the data in your workspace with a customer-managed key. This key is safeguarded in your [Azure Key Vault](../../key-vault/general/overview.md), which allows you to take ownership of key management and rotation.
+A complete Encryption-at-Rest solution ensures the data is never persisted in unencrypted form. Double encryption of data at rest mitigates threats with two, separate layers of encryption to protect against compromises of any single layer. Azure Synapse Analytics offers a second layer of encryption for the data in your workspace with a customer-managed key. This key is safeguarded in your [Azure Key Vault](../../key-vault/general/overview.md), which allows you to take ownership of key management and rotation.
The first layer of encryption for Azure services is enabled with platform-managed keys. By default, Azure Disks, and data in Azure Storage accounts are automatically encrypted at rest. Learn more about how encryption is used in Microsoft Azure in the [Azure Encryption Overview](../../security/fundamentals/encryption-overview.md). ## Azure Synapse encryption
-This section will help you better understand how customer-managed key encryption is enabled and enforced in Synapse workspaces. This encryption uses existing keys or new keys generated in Azure Key Vault. A single key is used to encrypt all the data in a workspace. Synapse workspaces support RSA 2048 and 3072 byte-sized keys, as well as RSA-HSM keys.
+This section will help you better understand how customer-managed key encryption is enabled and enforced in Synapse workspaces. This encryption uses existing keys or new keys generated in Azure Key Vault. A single key is used to encrypt all the data in a workspace. Synapse workspaces support RSA 2048 and 3072 byte-sized keys, and RSA-HSM keys.
> [!NOTE] > Synapse workspaces do not support the use of EC, EC-HSM, and oct-HSM keys for encryption.
Workspaces can be configured to enable double encryption with a customer-managed
### Key access and workspace activation
-The Azure Synapse encryption model with customer-managed keys involves the workspace accessing the keys in Azure Key Vault to encrypt and decrypt as needed. The keys are made accessible to the workspace either through an access policy or [Azure Key Vault RBAC access](../../key-vault/general/rbac-guide.md). When granting permissions via an Azure Key Vault access policy, choose the ["Application-only"](../../key-vault/general/security-features.md#key-vault-authentication-options) option during policy creation (select the workspaces managed identity and do not add it as an authorized application).
+The Azure Synapse encryption model with customer-managed keys involves the workspace accessing the keys in Azure Key Vault to encrypt and decrypt as needed. The keys are made accessible to the workspace either through an access policy or [Azure Key Vault RBAC](../../key-vault/general/rbac-guide.md). When granting permissions via an Azure Key Vault access policy, choose the ["Application-only"](../../key-vault/general/security-features.md#key-vault-authentication-options) option during policy creation (select the workspaces managed identity and do not add it as an authorized application).
- The workspace managed identity must be granted the permissions it needs on the key vault before the workspace can be activated. This phased approach to workspace activation ensures that data in the workspace is encrypted with the customer-managed key. Note that encryption can be enabled or disabled for dedicated SQL Pools- each pool is not enabled for encryption by default.
+ The workspace-managed identity must be granted the permissions it needs on the key vault before the workspace can be activated. This phased approach to workspace activation ensures that data in the workspace is encrypted with the customer-managed key. Encryption can be enabled or disabled for dedicated SQL Pools- each pool is not enabled for encryption by default.
#### Using a User-assigned Managed identity
-Workspaces can be configured to use a [User-assigned Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to access your customer-managed key stored in Azure Key Vault. Configure a User-assigned Managed identity to avoid phased activation of your Azure Synapse workspace when using double encryption with customer managed keys. The Managed Identity Contributor built-in role is required to assign a user-assigned managed identity to an Azure Synapse workspace.
+Workspaces can be configured to use a [User-assigned Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to access your customer-managed key stored in Azure Key Vault. Configure a User-assigned Managed identity to avoid phased activation of your Azure Synapse workspace when using double encryption with customer-managed keys. The Managed Identity Contributor built-in role is required to assign a user-assigned managed identity to an Azure Synapse workspace.
> [!NOTE] > A User-assigned Managed Identity cannot be configured to access customer-managed key when Azure Key Vault is behind a firewall.
Workspaces can be configured to use a [User-assigned Managed identity](../../act
#### Permissions
-To encrypt or decrypt data at rest, the managed identity must have the following permissions:
+To encrypt or decrypt data at rest, the managed identity must have the following permissions. Similarly, if you are using a Resource Manager template to create a new key, the 'keyOps' parameter of the template must have the following permissions:
+ * WrapKey (to insert a key into Key Vault when creating a new key). * UnwrapKey (to get the key for decryption). * Get (to read the public part of a key) #### Workspace activation
-If you do not configure a user-assigned managed identity to access customer managed keys during workspace creation, your workspace will remain in a "Pending" state until activation succeeds. The workspace must be activated before you can fully use all functionality. For example, you can only create a new dedicated SQL pool once activation succeeds. Grant the workspace managed identity access to the key vault and click on the activation link in the workspace Azure portal banner. Once the activation completes successfully, your workspace is ready to use with the assurance that all data in it is protected with your customer-managed key. As previously noted, the key vault must have purge protection enabled for activation to succeed.
+If you do not configure a user-assigned managed identity to access customer-managed keys during workspace creation, your workspace will remain in a "Pending" state until activation succeeds. The workspace must be activated before you can fully use all functionality. For example, you can only create a new dedicated SQL pool once activation succeeds. Grant the workspace-managed identity access to the key vault and select the activation link in the workspace Azure portal banner. Once the activation completes successfully, your workspace is ready to use with the assurance that all data in it is protected with your customer-managed key. As previously noted, the key vault must have purge protection enabled for activation to succeed.
:::image type="content" source="./media/workspaces-encryption/workspace-activation.png" alt-text="This diagram shows the banner with the activation link for the workspace." lightbox="./media/workspaces-encryption/workspace-activation.png":::

### Manage the workspace customer-managed key
-You can change the customer-managed key used to encrypt data from the **Encryption** page in the Azure portal. Here too, you can choose a new key using a key identifier or select from Key Vaults that you have access to in the same region as the workspace. If you choose a key in a different key vault from the ones previously used, grant the workspace managed identity "Get", "Wrap", and "Unwrap" permissions on the new key vault. The workspace will validate its access to the new key vault and all data in the workspace will be re-encrypted with the new key.
+You can change the customer-managed key used to encrypt data from the **Encryption** page in the Azure portal. Here too, you can choose a new key using a key identifier or select from Key Vaults that you have access to in the same region as the workspace. If you choose a key in a different key vault from the ones previously used, grant the workspace-managed identity "Get", "Wrap", and "Unwrap" permissions on the new key vault. The workspace will validate its access to the new key vault and all data in the workspace will be re-encrypted with the new key.
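As an illustrative sketch (the key vault and key names are placeholders), you might prepare a replacement key and confirm purge protection before pointing the workspace at it:

```azurecli-interactive
# Create a replacement key that supports the wrapKey and unwrapKey operations.
az keyvault key create \
    --vault-name myNewKeyVault \
    --name myNewWorkspaceKey \
    --ops wrapKey unwrapKey

# The vault that holds the workspace key must have purge protection enabled.
az keyvault update \
    --name myNewKeyVault \
    --enable-purge-protection true
```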
:::image type="content" source="./media/workspaces-encryption/workspace-encryption-management.png" alt-text="This diagram shows the workspace Encryption section in the Azure portal." lightbox="./media/workspaces-encryption/workspace-encryption-management.png":::
SQL Transparent Data Encryption (TDE) is available for dedicated SQL Pools in wo
[Use built-in Azure Policies to implement encryption protection for Synapse workspaces](../policy-reference.md)
-[Create an Azure key vault and a key by using ARM template](../../key-vault/keys/quick-create-template.md)
+[Create an Azure key vault and a key by using Resource Manager template](../../key-vault/keys/quick-create-template.md)
synapse-analytics How To Query Analytical Store Spark 3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark-3.md
Previously updated : 08/26/2021 Last updated : 01/27/2022
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/automatic-vm-guest-patching.md
Title: Automatic VM Guest Patching for Azure VMs description: Learn how to automatically patch virtual machines in Azure.-+ Last updated 10/20/2021-+
As a new rollout is triggered every month, a VM will receive at least one patch
## Supported OS images

Only VMs created from certain OS platform images are currently supported. Custom images are currently not supported.
+> [!NOTE]
+> Automatic VM guest patching is only supported on Gen1 images.
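As a hedged example of opting a VM into this feature at creation time (the `--patch-mode` parameter is assumed to be available in your Azure CLI version, and all names are placeholders):

```azurecli-interactive
# Placeholder names; --patch-mode AutomaticByPlatform requests platform-orchestrated patching.
az vm create \
    --resource-group myResourceGroup \
    --name myPatchedVM \
    --image Win2019Datacenter \
    --admin-username azureuser \
    --enable-agent true \
    --patch-mode AutomaticByPlatform
```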
+
The following platform SKUs are currently supported (and more are added periodically):

| Publisher | OS Offer | Sku |
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/generalize.md
First you'll deprovision the VM by using the Azure VM agent to delete machine-sp
4. After the command completes, enter **exit** to close the SSH client. The VM will still be running at this point.
+Deallocate the VM that you deprovisioned with `az vm deallocate` so that it can be generalized.
+
+```azurecli-interactive
+az vm deallocate \
+ --resource-group myResourceGroup \
+ --name myVM
+```
+
Then the VM needs to be marked as generalized on the platform.

```azurecli-interactive
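# A minimal sketch (assumption): mark the deallocated VM as generalized so its image can be captured.
az vm generalize \
    --resource-group myResourceGroup \
    --name myVM
```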
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-applications.md
Most third-party applications in Windows are available as .exe or .msi installers.
Installer executables typically launch a user interface (UI) and require someone to click through the UI. If the installer supports a silent mode parameter, it should be included in your installation string.
-Cmd.exe also expects executable files to have the extension .exe, so you need to rename the file to have te .exe extension.
+Cmd.exe also expects executable files to have the extension .exe, so you need to rename the file to have the .exe extension.
If you want to create a VM application package for myApp.exe, which ships as an executable, and your VM application is called 'myApp', write the command assuming that the application package is in the current directory:
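A hedged sketch of publishing such a package version with the Azure CLI is shown below; the gallery resources, the storage blob URL, and the exact parameter names (for example `--package-file-link` and `--install-command`) are assumptions rather than confirmed syntax, and the silent switch `/S` is only an example:

```azurecli-interactive
# All resource names, the blob URL, and the parameter names are assumptions for illustration.
az sig gallery-application version create \
    --resource-group myResourceGroup \
    --gallery-name myGallery \
    --application-name myApp \
    --version-name 1.0.0 \
    --package-file-link "https://mystorageaccount.blob.core.windows.net/apps/myApp" \
    --install-command 'move .\myApp .\myApp.exe & start /wait .\myApp.exe /S' \
    --remove-command 'del .\myApp.exe'
```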
virtual-network Virtual Network Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-disaster-recovery-guidance.md
A: The virtual network and the resources in the affected region remains inaccess
![Simple Virtual Network Diagram](./media/virtual-network-disaster-recovery-guidance/vnet.png)
-**Q: What can I to do re-create the same virtual network in a different region?**
+**Q: What can I do to re-create the same virtual network in a different region?**
A: Virtual networks are fairly lightweight resources. You can invoke Azure APIs to create a VNet with the same address space in a different region. To recreate the same environment that was present in the affected region, you make API calls to redeploy the Cloud Services web and worker roles, and the virtual machines that you had. If you have on-premises connectivity, such as in a hybrid deployment, you have to deploy a new VPN Gateway, and connect to your on-premises network.
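For example, a minimal sketch of recreating a virtual network with the same address space in another region (names, region, and prefixes are placeholders) could look like this:

```azurecli-interactive
# Recreate the VNet in another region with the same address space and an example subnet.
az network vnet create \
    --resource-group myRecoveryResourceGroup \
    --name myVNet \
    --location westus2 \
    --address-prefixes 10.0.0.0/16 \
    --subnet-name default \
    --subnet-prefixes 10.0.0.0/24
```

Gateways, peerings, and network security group associations still need to be recreated separately.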
virtual-wan Sd Wan Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/sd-wan-connectivity-architecture.md
Azure Virtual WAN is a networking service that brings together many cloud connectivity and security services with a single operational interface. These services include branch (via Site-to-site VPN), remote user (Point-to-site VPN), private (ExpressRoute) connectivity, intra-cloud transitive connectivity for Vnets, VPN and ExpressRoute interconnectivity, routing, Azure Firewall, and encryption for private connectivity.
-Although Azure Virtual WAN itself is a Software Defined WAN (SD-WAN), it is also designed to enable seamless interconnection with the premises-based SD-WAN technologies and services. Many such services are offered by our [Virtual WAN](virtual-wan-locations-partners.md) ecosystem and Azure Networking Managed Services partners [(MSPs)](../networking/networking-partners-msp.md). Enterprises that are transforming their private WAN to SD-WAN have options when interconnecting their private SD-WAN with Azure Virtual WAN. Enterprises can choose from these options:
+Although Azure Virtual WAN is a cloud-based SD-WAN that provides a rich suite of Azure first-party connectivity, routing, and security services, Azure Virtual WAN is also designed to enable seamless interconnection with premises-based SD-WAN and SASE technologies and services. Many such services are offered by our [Virtual WAN](virtual-wan-locations-partners.md) ecosystem and Azure Networking Managed Services partners [(MSPs)](../networking/networking-partners-msp.md). Enterprises that are transforming their private WAN to SD-WAN have options when interconnecting their private SD-WAN with Azure Virtual WAN. Enterprises can choose from these options:
* Direct Interconnect Model
* Direct Interconnect Model with NVA-in-VWAN-hub
vpn-gateway Point To Site Vpn Client Configuration Azure Cert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/point-to-site-vpn-client-configuration-azure-cert.md
Title: 'Create & install P2S VPN client configuration files: certificate authentication'
+ Title: 'P2S VPN client profile configuration files: certificate authentication'
description: Learn how to generate and install VPN client configuration files for Windows, Linux (strongSwan), and macOS. This article applies to VPN Gateway P2S configurations that use certificate authentication.
Last updated 07/15/2021
-# Generate and install VPN client configuration files for P2S certificate authentication
+# Generate and install VPN client profile configuration files for certificate authentication
-When you connect to an Azure VNet using Point-to-Site and certificate authentication, you use the VPN client that is natively installed on the operating system from which you are connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients for Windows, Mac IKEv2 VPN, or Linux.
+When you connect to an Azure VNet using Point-to-Site and certificate authentication, you use the VPN client that is natively installed on the operating system from which you're connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients for Windows, Mac IKEv2 VPN, or Linux.
The VPN client configuration files that you generate are specific to the P2S VPN gateway configuration for the virtual network. If there are any changes to the Point-to-Site VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client configuration files and apply the new configuration to all of the VPN clients that you want to connect.
The VPN client configuration files that you generate are specific to the P2S VPN
You can generate client configuration files using PowerShell, or by using the Azure portal. Either method returns the same zip file. Unzip the file to view the following folders:

* **WindowsAmd64** and **WindowsX86**, which contain the Windows 32-bit and 64-bit installer packages, respectively. The **WindowsAmd64** installer package is for all supported 64-bit Windows clients, not just Amd.
-* **Generic**, which contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder is not present.
+* **Generic**, which contains general information used to create your own VPN client configuration. The Generic folder is provided if IKEv2 or SSTP+IKEv2 was configured on the gateway. If only SSTP is configured, then the Generic folder isn't present.
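As an alternative sketch to the portal and PowerShell methods, recent Azure CLI releases can generate the same package; the gateway and resource group names below are placeholders, and EAPTLS is assumed as the authentication method for a certificate-based configuration:

```azurecli-interactive
# Placeholder names; the command returns a URL for downloading the profile zip file.
az network vnet-gateway vpn-client generate \
    --resource-group myResourceGroup \
    --name myVpnGateway \
    --authentication-method EAPTLS
```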
### <a name="zipportal"></a>Generate files using the Azure portal
You can generate client configuration files using PowerShell, or by using the Az
## <a name="installmac"></a>Mac (macOS)
-In order to connect to Azure, you must manually configure the native IKEv2 VPN client. Azure does not provide a *mobileconfig* file. You can find all of the information that you need for configuration in the **Generic** folder.
+In order to connect to Azure, you must manually configure the native IKEv2 VPN client. Azure doesn't provide a *mobileconfig* file. You can find all of the information that you need for configuration in the **Generic** folder.
-If you don't see the Generic folder in your download, it's likely that IKEv2 was not selected as a tunnel type. Note that the VPN gateway Basic SKU does not support IKEv2. On the VPN gateway, verify that the SKU is not Basic. Then, select IKEv2 and generate the zip file again to retrieve the Generic folder.
+If you don't see the Generic folder in your download, it's likely that IKEv2 wasn't selected as a tunnel type. Note that the VPN gateway Basic SKU doesn't support IKEv2. On the VPN gateway, verify that the SKU isn't Basic. Then, select IKEv2 and generate the zip file again to retrieve the Generic folder.
The Generic folder contains the following files:

* **VpnSettings.xml**, which contains important settings like server address and tunnel type.
-* **VpnServerRoot.cer**, which contains the root certificate required to validate the Azure VPN Gateway during P2S connection setup.
+* **VpnServerRoot.cer**, which contains the root certificate required to validate the Azure VPN gateway during P2S connection setup.
Use the following steps to configure the native VPN client on Mac for certificate authentication. These steps must be completed on every Mac that you want to connect to Azure.

### Import root certificate file
-1. Copy to the root certificate file to your Mac. Double-click the certificate. The certificate will either automatically install, or you will see the **Add Certificates** page.
+1. Copy the root certificate file to your Mac. Double-click the certificate. The certificate will either automatically install, or you'll see the **Add Certificates** page.
1. On the **Add Certificates** page, select **login** from the dropdown. :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/login.png" alt-text="Screenshot shows Add Certificates page with login selected.":::
Verify that both the client and the root certificate are installed. The client c
**Catalina:**

* For **Authentication Settings** select **None**.
- * Select **Certificate**, click **Select** and select the correct client certificate that you installed earlier. Then, click **OK**.
+ * Select **Certificate**, select **Select** and select the correct client certificate that you installed earlier. Then, select **OK**.
:::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/catalina.png" alt-text="Screenshot shows the Network window with None selected for Authentication Settings and Certificate selected."::: **Big Sur:**
- * Click **Authentication Settings**, then select **Certificate**. 
+ * Select **Authentication Settings**, then select **Certificate**. 
:::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/authentication-certificate.png" alt-text="Screenshot shows authentication settings with certificate selected." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/authentication-certificate.png":::
- * Click **Select** to open the **Choose An Identity** page. The **Choose An Identity** page displays a list of certificates for you to choose from. If you are unsure which certificate to use, you can click **Show Certificate** to see more information about each certificate.
+ * Select **Select** to open the **Choose An Identity** page. The **Choose An Identity** page displays a list of certificates for you to choose from. If you're unsure which certificate to use, you can select **Show Certificate** to see more information about each certificate.
:::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/show-certificate.png" alt-text="Screenshot shows certificate properties.." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/show-certificate.png"::: * Select the proper certificate, then select **Continue**. :::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/choose-identity.png" alt-text="Screenshot shows Choose an Identity, where you can select a certificate." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/choose-identity.png":::
- * On the **Authentication Settings** page, verify that the correct certificate is shown, then click **OK**.
+ * On the **Authentication Settings** page, verify that the correct certificate is shown, then select **OK**.
:::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/certificate.png" alt-text="Screenshot shows the Choose An Identity dialog box where you can select the proper certificate." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/certificate.png":::
-1. For both Catalina and Big Sur, in the **Local ID** field, specify the name of the certificate. In this example, it is `P2SChildCert`.
+1. For both Catalina and Big Sur, in the **Local ID** field, specify the name of the certificate. In this example, it's `P2SChildCert`.
:::image type="content" source="./media/point-to-site-vpn-client-configuration-azure-cert/local-id.png" alt-text="Screenshot shows local ID value." lightbox="./media/point-to-site-vpn-client-configuration-azure-cert/expanded/local-id.png"::: 1. Select **Apply** to save all changes.
Verify that both the client and the root certificate are installed. The client c
### <a name="genlinuxcerts"></a>Generate certificates
-If you have not already generated certificates, use the following steps:
+If you haven't already generated certificates, use the following steps:
[!INCLUDE [strongSwan certificates](../../includes/vpn-gateway-strongswan-certificates-include.md)]

### <a name="install"></a>Install and configure
-The following instructions were created on Ubuntu 18.0.4. Ubuntu 16.0.10 does not support strongSwan GUI. If you want to use Ubuntu 16.0.10, you will have to use the [command line](#linuxinstallcli). The examples below may not match screens that you see, depending on your version of Linux and strongSwan.
+The following instructions were created on Ubuntu 18.0.4. Ubuntu 16.0.10 doesn't support strongSwan GUI. If you want to use Ubuntu 16.0.10, you'll have to use the [command line](#linuxinstallcli). The following examples may not match screens that you see, depending on your version of Linux and strongSwan.
1. Open the **Terminal** to install **strongSwan** and its Network Manager by running the command in the example.
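As a sketch, assuming the standard Ubuntu package names (the exact package set can vary by release), the install command typically looks something like this:

```bash
# Package names are assumptions; adjust them for your distribution and release.
sudo apt update
sudo apt install strongswan strongswan-pki libstrongswan-extra-plugins network-manager-strongswan
```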
The following instructions were created on Ubuntu 18.0.4. Ubuntu 16.0.10 does no
### Generate certificates
-If you have not already generated certificates, use the following steps:
+If you haven't already generated certificates, use the following steps:
[!INCLUDE [strongSwan certificates](../../includes/vpn-gateway-strongswan-certificates-include.md)]
If you have not already generated certificates, use the following steps:
1. Extract the file.
1. From the **Generic** folder, copy or move the **VpnServerRoot.cer** to **/etc/ipsec.d/cacerts**.
1. Copy or move **client.p12** to **/etc/ipsec.d/private/**. This file is the client certificate for the VPN gateway.
-1. Open the **VpnSettings.xml** file and copy the `<VpnServer>` value. You will use this value in the next step.
-1. Adjust the values in the example below, then add the example to the **/etc/ipsec.conf** configuration.
+1. Open the **VpnSettings.xml** file and copy the `<VpnServer>` value. You'll use this value in the next step.
+1. Adjust the values in the following example, then add the example to the **/etc/ipsec.conf** configuration.
```
conn azure