Updates from: 02/19/2021 04:09:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Troubleshoot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/troubleshoot-alerts.md
This error is unrecoverable. To resolve the alert, [delete your existing managed
Some automatically generated service principals are used to manage and create resources for a managed domain. If the access permissions for one of these service principals are changed, the domain is unable to correctly manage resources. The following steps show you how to understand and then grant access permissions to a service principal:
-1. Read about [role-based access control and how to grant access to applications in the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. Read about [Azure role-based access control and how to grant access to applications in the Azure portal](../role-based-access-control/role-assignments-portal.md).
2. Review the access that the service principal with the ID *abba844e-bc0e-44b0-947a-dc74e5d09022* has and grant the access that was denied at an earlier date.

## AADDS112: Not enough IP addresses in the managed domain
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/fido2-compatibility.md
This table shows support for authenticating Azure Active Directory (Azure AD) an
| **macOS** | ![Chrome supports USB on macOS for AAD accounts.][y] | ![Chrome does not support NFC on macOS for AAD accounts.][n] | ![Chrome does not support BLE on macOS for AAD accounts.][n] | ![Edge supports USB on macOS for AAD accounts.][y] | ![Edge does not support NFC on macOS for AAD accounts.][n] | ![Edge does not support BLE on macOS for AAD accounts.][n] | ![Firefox does not support USB on macOS for AAD accounts.][n] | ![Firefox does not support NFC on macOS for AAD accounts.][n] | ![Firefox does not support BLE on macOS for AAD accounts.][n] |
| **Linux** | ![Chrome supports USB on Linux for AAD accounts.][y] | ![Chrome does not support NFC on Linux for AAD accounts.][n] | ![Chrome does not support BLE on Linux for AAD accounts.][n] | ![Edge does not support USB on Linux for AAD accounts.][n] | ![Edge does not support NFC on Linux for AAD accounts.][n] | ![Edge does not support BLE on Linux for AAD accounts.][n] | ![Firefox does not support USB on Linux for AAD accounts.][n] | ![Firefox does not support NFC on Linux for AAD accounts.][n] | ![Firefox does not support BLE on Linux for AAD accounts.][n] |
+## Unsupported browsers
+
+The following operating system and browser combinations are not supported, but future support and testing are being investigated. If you would like to see additional operating system and browser support, please leave feedback using the product feedback tool at the bottom of the page.
+
+| Operating system | Browser |
+| - | - |
+| iOS | Safari, Brave |
+| macOS | Safari |
+| Android | Chrome |
+| ChromeOS | Chrome |
+
## Operating system versions tested

The information in the table above was tested for the following operating system versions.
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
See [How the sample works](#how-the-sample-works) for an illustration.
This quickstart uses the Microsoft Authentication Library for Node.js (MSAL Node) with the authorization code flow.
-> [!IMPORTANT]
-> MSAL Node [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
- ## Prerequisites * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-aadsts-error-codes.md
For example, if you received the error code "AADSTS50058" then do a search in [h
| AADSTS50168 | ChromeBrowserSsoInterruptRequired - The client is capable of obtaining an SSO token through the Windows 10 Accounts extension, but the token was not found in the request or the supplied token was expired. |
| AADSTS50169 | InvalidRequestBadRealm - The realm is not a configured realm of the current service namespace. |
| AADSTS50170 | MissingExternalClaimsProviderMapping - The external controls mapping is missing. |
+| AADSTS50173 | FreshTokenNeeded - The provided grant has expired due to it being revoked, and a fresh auth token is needed. Either an admin or a user revoked the tokens for this user, causing subsequent token refreshes to fail and require reauthentication. Have the user sign in again. |
| AADSTS50177 | ExternalChallengeNotSupportedForPassthroughUsers - External challenge is not supported for passthrough users. |
| AADSTS50178 | SessionControlNotSupportedForPassthroughUsers - Session control is not supported for passthrough users. |
| AADSTS50180 | WindowsIntegratedAuthMissing - Integrated Windows authentication is needed. Enable the tenant for Seamless SSO. |
For example, if you received the error code "AADSTS50058" then do a search in [h
## Next steps
-* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
+* Have a question or can't find what you're looking for? Create a GitHub issue or see [Support and help options for developers](./developer-support-help-options.md) to learn about other ways you can get help and support.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
az role assignment create \
For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see the following articles:
-- [Add or remove Azure role assignments using Azure CLI](../../role-based-access-control/role-assignments-cli.md)
-- [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
-- [Add or remove Azure role assignments using Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md).
+- [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md)
+- [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
+- [Assign Azure roles using Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md).
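The `az role assignment create` fragment excerpted above is truncated. As a minimal sketch of a complete command, assuming the **Virtual Machine Administrator Login** role used for Azure AD sign-in to Windows VMs and placeholder values for the user, subscription, resource group, and VM:

```bash
# Sketch only: replace the UPN and the scope segments with your own values.
az role assignment create \
    --role "Virtual Machine Administrator Login" \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"
```

Assign **Virtual Machine User Login** instead for accounts that should sign in without administrator privileges.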
## Using Conditional Access
active-directory Active Directory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-faq.md
Azure AD paid services like Enterprise Mobility + Security complement other web
**A:** By default, the person who signs up for an Azure subscription is assigned the Owner role for Azure resources. An Owner can use either a Microsoft account or a work or school account from the directory that the Azure subscription is associated with. This role is authorized to manage services in the Azure portal.
-If others need to sign in and access services by using the same subscription, you can assign them the appropriate [built-in role](../../role-based-access-control/built-in-roles.md). For additional information, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+If others need to sign in and access services by using the same subscription, you can assign them the appropriate [built-in role](../../role-based-access-control/built-in-roles.md). For additional information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
By default, the person who signs up for an Azure subscription is assigned the Global Administrator role for the directory. The Global Administrator has access to all Azure AD directory features. Azure AD has a different set of administrator roles to manage the directory and identity-related features. These administrators will have access to various features in the Azure portal. The administrator's role determines what they can do, like create or edit users, assign administrative roles to others, reset user passwords, manage user licenses, or manage domains. For additional information on Azure AD directory admins and their roles, see [Assign a user to administrator roles in Azure Active Directory](active-directory-users-assign-role-azure-portal.md) and [Assigning administrator roles in Azure Active Directory](../roles/permissions-reference.md).
active-directory Active Directory How Subscriptions Associated Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
Before you can associate or add your subscription, do the following tasks:
- Sign in using an account that:
- - Has an [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment for the subscription. For information about how to assign the Owner role, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+ - Has an [Owner](../../role-based-access-control/built-in-roles.md#owner) role assignment for the subscription. For information about how to assign the Owner role, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
- Exists in both the current directory and in the new directory. The current directory is associated with the subscription. You'll associate the new directory with the subscription. For more information about getting access to another directory, see [Add Azure Active Directory B2B collaboration users in the Azure portal](../external-identities/add-users-administrator.md). - Make sure you're not using an Azure Cloud Service Providers (CSP) subscription (MS-AZR-0145P, MS-AZR-0146P, MS-AZR-159P), a Microsoft Internal subscription (MS-AZR-0015P), or a Microsoft Imagine subscription (MS-AZR-0144P).
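If you want to confirm the Owner requirement before you start, a quick check with the Azure CLI might look like the following sketch (the account name is a placeholder):

```bash
# Sketch: lists Owner assignments for the account on the current subscription, including inherited ones.
az role assignment list \
    --assignee "user@contoso.com" \
    --include-inherited \
    --query "[?roleDefinitionName=='Owner']" \
    --output table
```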
active-directory Application Proxy Integrate With Sharepoint Server Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-sharepoint-server-saml.md
+
+ Title: Publish on-premises SharePoint with Application Proxy - Azure AD
+description: Covers the basics about how to integrate an on-premises SharePoint server with Azure AD Application Proxy for SAML.
+
+documentationcenter: ''
+ms.devlang: na
+ Last updated : 10/02/2019
+# Integrate with SharePoint (SAML)
+
+This step-by-step guide explains how to secure access to the [Azure Active Directory integrated on-premises SharePoint (SAML)](https://docs.microsoft.com/azure/active-directory/saas-apps/sharepoint-on-premises-tutorial) using Azure AD Application Proxy, where users in your organization (Azure AD, B2B) connect to SharePoint over the internet.
+
+> [!NOTE]
+> If you're new to Azure AD Application Proxy and want to learn more, see [Remote access to on-premises applications through Azure AD Application Proxy](https://docs.microsoft.com/azure/active-directory/manage-apps/application-proxy).
+
+There are three primary advantages of this setup:
+
+- Azure AD Application Proxy ensures that only authenticated traffic can reach your internal network and the SharePoint server.
+- Your users can access the SharePoint sites as usual without using a VPN.
+- You can control access by user assignment at the Azure AD Application Proxy level, and you can increase security with Azure AD features like Conditional Access and Multi-Factor Authentication (MFA).
+
+This process requires two Enterprise Applications. One is a SharePoint on-premises instance that you publish from the gallery to your list of managed SaaS apps. The second is an on-premises application (non-gallery application) you'll use to publish the first Enterprise Gallery Application.
+
+## Prerequisites
+
+To complete this configuration, you need the following resources:
+ - A SharePoint 2013 farm or newer. The SharePoint farm must be [integrated with Azure AD](https://docs.microsoft.com/azure/active-directory/saas-apps/sharepoint-on-premises-tutorial).
+ - An Azure AD tenant with a plan that includes Application Proxy. Learn more about [Azure AD plans and pricing](https://azure.microsoft.com/pricing/details/active-directory/).
+ - A [custom, verified domain](https://docs.microsoft.com/azure/active-directory/fundamentals/add-custom-domain) in the Azure AD tenant. The verified domain must match the SharePoint URL suffix.
+ - An SSL certificate is required. See the details in [custom domain publishing](https://docs.microsoft.com/azure/active-directory/manage-apps/application-proxy-configure-custom-domain).
+ - On-premises Active Directory users must be synchronized with Azure AD Connect, and must be configured to [sign in to Azure](https://docs.microsoft.com/azure/active-directory/hybrid/plan-connect-user-signin).
+ - For cloud-only and B2B guest users, you need to [grant access to a guest account to SharePoint on-premises in the Azure portal](https://docs.microsoft.com/azure/active-directory/saas-apps/sharepoint-on-premises-tutorial#grant-access-to-a-guest-account-to-sharepoint-on-premises-in-the-azure-portal).
+ - An Application Proxy connector installed and running on a machine within the corporate domain.
++
+## Step 1: Integrate SharePoint on-premises with Azure AD
+
+1. Configure the SharePoint on-premises app. For more information, see [Tutorial: Azure Active Directory single sign-on integration with SharePoint on-premises](https://docs.microsoft.com/azure/active-directory/saas-apps/sharepoint-on-premises-tutorial).
+2. Validate the configuration before moving to the next step. To validate, try to access the SharePoint on-premises from the internal network and confirm it's accessible internally.
++
+## Step 2: Publish the SharePoint on-premises application with Application Proxy
+
+In this step, you create an application in your Azure AD tenant that uses Application Proxy. You set the external URL and specify the internal URL, both of which are used later in SharePoint.
+
+> [!NOTE]
+> The Internal and External URLs must match the **Sign on URL** in the SAML Based Application configuration in Step 1.
+
+ ![Screenshot that shows the Sign on URL value.](./media/application-proxy-integrate-with-sharepoint-server/sso-url-saml.png)
++
+ 1. Create a new Azure AD Application Proxy application with custom domain. For step-by-step instructions, see [Custom domains in Azure AD Application Proxy](https://docs.microsoft.com/azure/active-directory/manage-apps/application-proxy-configure-custom-domain).
+
+ - Internal URL: https://portal.contoso.com/
+ - External URL: https://portal.contoso.com/
+ - Pre-Authentication: Azure Active Directory
+ - Translate URLs in Headers: No
+ - Translate URLs in Application Body: No
+
+ ![Screenshot that shows the options you use to create the app.](./media/application-proxy-integrate-with-sharepoint-server/create-application-azure-active-directory.png)
+
+2. Assign the [same groups](https://docs.microsoft.com/azure/active-directory/saas-apps/sharepoint-on-premises-tutorial#create-an-azure-ad-security-group-in-the-azure-portal) you assigned to the on-premises SharePoint Gallery Application.
+
+3. Finally, go to the **Properties** section and set **Visible to users?** to **No**. This option ensures that only the icon of the first application appears on the My Apps Portal (https://myapplications.microsoft.com).
+
+ ![Screenshot that shows where to set the Visible to users? option.](./media/application-proxy-integrate-with-sharepoint-server/configure-properties.png)
+
+## Step 3: Test your application
+
+Using a browser from a computer on an external network, navigate to the URL (https://portal.contoso.com/) that you configured during the publish step. Make sure you can sign in with the test account that you set up.
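A command-line spot check from outside the corporate network can complement the browser test. This is a sketch using the example URL from this walkthrough; with Azure AD pre-authentication enabled, expect a redirect to the Microsoft sign-in page rather than SharePoint content:

```bash
# Sketch: inspect only the response headers of the published external URL.
curl -I https://portal.contoso.com/
```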
+
active-directory Tutorial Linux Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-cosmos-db.md
This tutorial shows you how to use a system-assigned managed identity for a Linu
- If you're not familiar with the managed identities for Azure resources feature, see this [overview](overview.md). - If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.-- To perform the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Use Role-Based Access Control to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+- To perform the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
- To run the example scripts, you have two options: - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top right corner of code blocks. - Run scripts locally by installing the latest version of the [Azure CLI](/cli/azure/install-azure-cli), then sign in to Azure using [az login](/cli/azure/reference-index#az-login). Use an account associated with the Azure subscription in which you'd like to create resources.
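The tutorial goes on to use the VM's system-assigned identity to obtain an access token. As a rough sketch of that step (run from inside the Linux VM; the resource URI here targets Azure Resource Manager, which is an assumption you should adjust to match the tutorial's calls):

```bash
# Sketch: request a token from the Azure Instance Metadata Service, which is reachable only from the VM itself.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
```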
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
You learn how to:
- A basic understanding of Managed identities. If you're not familiar with the managed identities for Azure resources feature, see this [overview](overview.md). - An Azure account, [sign up for a free account](https://azure.microsoft.com/free/).-- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Use Role-Based Access Control to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
- You also need a Linux Virtual machine that has system assigned managed identities enabled. - If you need to create a virtual machine for this tutorial, you can follow the article titled [Create a Linux virtual machine with the Azure portal](../../virtual-machines/linux/quick-create-portal.md#create-virtual-machine)
active-directory Tutorial Windows Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md
This tutorial shows you how to access the Azure Resource Manager API using a Win
- A basic understanding of Managed identities. If you're not familiar with the managed identities for Azure resources feature, see this [overview](overview.md). - An Azure account, [sign up for a free account](https://azure.microsoft.com/free/).-- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Use Role-Based Access Control to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
- You also need a Windows Virtual machine that has system assigned managed identities enabled. - If you need to create a virtual machine for this tutorial, you can follow the article titled [Create a virtual machine with system-assigned identity enabled](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity)
active-directory Tutorial Windows Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-cosmos-db.md
This tutorial shows you how to use a system-assigned managed identity for a Wind
- If you're not familiar with the managed identities for Azure resources feature, see this [overview](overview.md). - If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.-- To perform the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Use Role-Based Access Control to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+- To perform the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) - You also need a Windows Virtual machine that has system assigned managed identities enabled. - If you need to create a virtual machine for this tutorial, you can follow the article titled [Create a virtual machine with system-assigned identity enabled](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity)
active-directory Tutorial Windows Vm Access Datalake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-datalake.md
This tutorial shows you how to use a system-assigned managed identity for a Wind
- An understanding of Managed identities. If you're not familiar with the managed identities for Azure resources feature, see this [overview](overview.md). - An Azure account, [sign up for a free account](https://azure.microsoft.com/free/).-- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Use Role-Based Access Control to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
- You also need a Windows Virtual machine that has system assigned managed identities enabled. - If you need to create a virtual machine for this tutorial, you can follow the article titled [Create a virtual machine with system-assigned identity enabled](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity)
active-directory Tutorial Windows Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md
You learn how to:
- An understanding of Managed identities. If you're not familiar with the managed identities for Azure resources feature, see this [overview](overview.md). - An Azure account, [sign up for a free account](https://azure.microsoft.com/free/).-- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Use Role-Based Access Control to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
- You also need a Windows Virtual machine that has system assigned managed identities enabled. - If you need to create a virtual machine for this tutorial, you can follow the article titled [Create a virtual machine with system-assigned identity enabled](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity)
active-directory Tutorial Windows Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-storage-sas.md
A Service SAS provides the ability to grant limited access to objects in a stora
- An understanding of Managed identities. If you're not familiar with the managed identities for Azure resources feature, see this [overview](overview.md). - An Azure account, [sign up for a free account](https://azure.microsoft.com/free/).-- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Use Role-Based Access Control to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+- "Owner" permissions at the appropriate scope (your subscription or resource group) to perform required resource creation and role management steps. If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
- You also need a Windows Virtual machine that has system assigned managed identities enabled. - If you need to create a virtual machine for this tutorial, you can follow the article titled [Create a virtual machine with system-assigned identity enabled](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity)
active-directory Tutorial Windows Vm Ua Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-ua-arm.md
You learn how to:
- [Create a Windows virtual machine](../../virtual-machines/windows/quick-create-portal.md) -- To perform the required resource creation and role management steps in this tutorial, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Use Role-Based Access Control to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+- To perform the required resource creation and role management steps in this tutorial, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
- To run the example scripts, you have two options: - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top right corner of code blocks.
active-directory Concept Delegation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/concept-delegation.md
As an organization grows, it can be difficult to keep track of which users have
In the Azure AD portal, you can [view all the members of any role](manage-roles-portal.md), which can help you quickly check your deployment and delegate permissions.
-If you're interested in delegating access to Azure resources instead of administrative access in Azure AD, see [Assign an Azure role](../../role-based-access-control/role-assignments-portal.md).
+If you're interested in delegating access to Azure resources instead of administrative access in Azure AD, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Delegation planning
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Usage Summary Reports Reader |   | :heavy_check_mark: | :heavy_check_mark:
## Next steps
-* To learn more about how to assign a user as an administrator of an Azure subscription, see [Add or remove Azure role assignments (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md)
+* To learn more about how to assign a user as an administrator of an Azure subscription, see [Assign a user as an administrator of an Azure subscription](../../role-based-access-control/role-assignments-portal-subscription-admin.md)
* To learn more about how resource access is controlled in Microsoft Azure, see [Understand the different roles](../../role-based-access-control/rbac-and-directory-admin-roles.md) * For details on the relationship between subscriptions and an Azure AD tenant, or for instructions to associate or add a subscription, see [Associate or add an Azure subscription to your Azure Active Directory tenant](../fundamentals/active-directory-how-subscriptions-associated-directory.md)
advisor Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/security-baseline.md
Use built-in roles to allocate permission and only create custom role when requi
What is Azure role-based access control (Azure RBAC) ../role-based-access-control/overview.md -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
aks Use Pod Security On Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-pod-security-on-azure-policy.md
The following limitations apply only to the Azure Policy Add-on for AKS:
- [AKS Pod security policy (preview)](use-pod-security-policies.md) and the Azure Policy Add-on for AKS can't both be enabled. - Namespaces automatically excluded by Azure Policy Add-on for evaluation: _kube-system_,
- _gatekeeper-system_, and _aks-periscope_.
+ _gatekeeper-system_, and _aks-periscope_. If you use Calico network policy with Kubernetes version 1.20
+ and above, two more namespaces are automatically excluded: _calico-system_ and _tigera-operator_.
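For reference, the add-on discussed here is enabled per cluster. A sketch with placeholder cluster and resource group names:

```bash
# Sketch: enables the Azure Policy Add-on for AKS on an existing cluster.
az aks enable-addons \
    --addons azure-policy \
    --name myAKSCluster \
    --resource-group myResourceGroup
```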
### Recommendations
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-role-based-access-control.md
Azure API Management relies on Azure role-based access control (Azure RBAC) to e
API Management currently provides three built-in roles and will add two more roles in the near future. These roles can be assigned at different scopes, including subscription, resource group, and individual API Management instance. For instance, if you assign the "API Management Service Reader" role to a user at the resource-group level, then the user has read access to all API Management instances inside the resource group.
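As a sketch of the resource-group example in the preceding paragraph (the user, resource group, and subscription values are placeholders):

```bash
# Sketch: grants read access to every API Management instance in the resource group.
az role assignment create \
    --role "API Management Service Reader Role" \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"
```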
-The following table provides brief descriptions of the built-in roles. You can assign these roles by using the Azure portal or other tools, including Azure [PowerShell](../role-based-access-control/role-assignments-powershell.md), [Azure CLI](../role-based-access-control/role-assignments-cli.md), and [REST API](../role-based-access-control/role-assignments-rest.md). For details about how to assign built-in roles, see [Use role assignments to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md).
+The following table provides brief descriptions of the built-in roles (one such command is sketched above). You can assign these roles by using the Azure portal or other tools, including Azure [PowerShell](../role-based-access-control/role-assignments-powershell.md), [Azure CLI](../role-based-access-control/role-assignments-cli.md), and [REST API](../role-based-access-control/role-assignments-rest.md). For details about how to assign built-in roles, see [Assign Azure roles to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md).
| Role | Read access<sup>[1]</sup> | Write access<sup>[2]</sup> | Service creation, deletion, scaling, VPN, and custom domain configuration | Access to the legacy publisher portal | Description | - | - | - | - | - | -
The [Azure Resource Manager resource provider operations](../role-based-access-c
To learn more about Role-Based Access Control in Azure, see the following articles: * [Get started with access management in the Azure portal](../role-based-access-control/overview.md)
- * [Use role assignments to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md)
+ * [Assign Azure roles to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md)
* [Custom roles in Azure RBAC](../role-based-access-control/custom-roles.md) * [Azure Resource Manager resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftapimanagement)
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-nodejs.md
if (req.secure) {
::: zone-end ++
+## Monitor with Application Insights
+
+Application Insights allows you to monitor your application's performance, exceptions, and usage without making any code changes. To attach the Application Insights agent, go to your web app in the Azure portal and select **Application Insights** under **Settings**, then select **Turn on Application Insights**. Next, select an existing Application Insights resource or create a new one. Finally, select **Apply** at the bottom. To instrument your web app using PowerShell, see [these instructions](../azure-monitor/app/azure-web-apps.md?tabs=netcore#enabling-through-powershell).
+
+This agent will monitor your server-side Node.js application. To monitor your client-side JavaScript, [add the JavaScript SDK to your project](../azure-monitor/app/javascript.md).
+
+For more information, see the [Application Insights extension release notes](../azure-monitor/app/web-app-extension-release-notes.md).
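If you script your configuration instead of using the portal, the same attach can usually be expressed as app settings. This sketch assumes the agent-based (codeless) settings apply to your app's platform; the app name, resource group, connection string, and the `~3` extension version are placeholders or assumptions to verify against the linked instructions:

```bash
# Sketch: points the web app at an existing Application Insights resource via app settings.
az webapp config appsettings set \
    --name myNodeApp \
    --resource-group myResourceGroup \
    --settings "APPLICATIONINSIGHTS_CONNECTION_STRING=<connection-string>" \
               "ApplicationInsightsAgent_EXTENSION_VERSION=~3"
```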
++
## Troubleshooting

When a working Node.js app behaves differently in App Service or has errors, try the following:
When a working Node.js app behaves differently in App Service or has errors, try
> [App Service Linux FAQ](faq-app-service-linux.md) ::: zone-end-
app-service Deploy Complex Application Predictably https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-complex-application-predictably.md
For more information, see [Using Azure PowerShell with Azure Resource Manager](.
This [preview tool](https://resources.azure.com) enables you to explore the JSON definitions of all the resource groups in your subscription and the individual resources. In the tool, you can edit the JSON definitions of a resource, delete an entire hierarchy of resources, and create new resources. The information readily available in this tool is very helpful for template authoring because it shows you what properties you need to set for a particular type of resource, the correct values, etc. You can even create your resource group in the [Azure Portal](https://portal.azure.com/), then inspect its JSON definitions in the explorer tool to help you templatize the resource group. ### Deploy to Azure button
-If you use GitHub for source control, you can put a [Deploy to Azure button](https://azure.microsoft.com/blog/2014/11/13/deploy-to-azure-button-for-azure-websites-2/) into your README.MD, which enables a turn-key deployment UI to Azure. While you can do this for any simple app, you can extend this to enable deploying an entire resource group by putting an azuredeploy.json file in the repository root. This JSON file, which contains the resource group template, will be used by the Deploy to Azure button to create the resource group. For an example, see the [ToDoApp](https://github.com/azure-appservice-samples/ToDoApp) sample, which you will use in this tutorial.
+If you use GitHub for source control, you can put a [Deploy to Azure button](https://docs.microsoft.com/azure/azure-resource-manager/templates/deploy-to-azure-button) into your README.MD, which enables a turn-key deployment UI to Azure. While you can do this for any simple app, you can extend this to enable deploying an entire resource group by putting an azuredeploy.json file in the repository root. This JSON file, which contains the resource group template, will be used by the Deploy to Azure button to create the resource group. For an example, see the [ToDoApp](https://github.com/azure-appservice-samples/ToDoApp) sample, which you will use in this tutorial.
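If you prefer the command line to the button, the same azuredeploy.json can be deployed directly. A sketch, assuming an existing resource group; pass `--parameters` for any required values, such as the SQL administrator credentials mentioned below:

```bash
# Sketch: deploys the ToDoApp template straight from the repository into an existing resource group.
az deployment group create \
    --resource-group myResourceGroup \
    --template-uri "https://raw.githubusercontent.com/azure-appservice-samples/ToDoApp/master/azuredeploy.json"
```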
## Get the sample resource group template

So now let's get right to it.

1. Navigate to the [ToDoApp](https://github.com/azure-appservice-samples/ToDoApp) App Service sample.
2. In readme.md, click **Deploy to Azure**.
-3. You're taken to the [deploy-to-azure](https://deploy.azure.com) site and asked to input deployment parameters. Notice that most of the fields are populated with the repository name and some random strings for you. You can change all the fields if you want, but the only things you have to enter are the SQL Server administrative login and the password, then click **Next**.
+3. You're taken to the [deploy-to-azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure-appservice-samples%2FToDoApp%2Fmaster%2Fazuredeploy.json) site and asked to input deployment parameters. Notice that most of the fields are populated with the repository name and some random strings for you. You can change all the fields if you want, but the only things you have to enter are the SQL Server administrative login and the password, then click **Next**.
![Shows the input deployment parameters on the deploy-to-azure site.](./media/app-service-deploy-complex-application-predictably/gettemplate-1-deploybuttonui.png) 4. Next, click **Deploy** to start the deployment process. Once the process runs to completion, click the http://todoapp*XXXX*.azurewebsites.net link to browse the deployed application.
To learn about the JSON syntax and properties for resource types deployed in thi
* [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms) * [Microsoft.Web/sites](/azure/templates/microsoft.web/sites) * [Microsoft.Web/sites/slots](/azure/templates/microsoft.web/sites/slots)
-* [Microsoft.Insights/autoscalesettings](/azure/templates/microsoft.insights/autoscalesettings)
+* [Microsoft.Insights/autoscalesettings](/azure/templates/microsoft.insights/autoscalesettings)
app-service Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-baseline.md
Monitor attacks against your App Service apps by using a real-time Web Applicati
- [How to use managed identities for App Service and Azure Functions](overview-managed-identity.md?context=azure%2Factive-directory%2Fmanaged-identities-azure-resources%2Fcontext%2Fmsi-context&tabs=dotnet)

-- [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md)
+- [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
**Azure Security Center monitoring**: Yes
Microsoft manages the underlying platform and treats all customer data as sensit
### 4.6: Use Role-based access control to control access to resources
-**Guidance**: Use role-based access control (Azure RBAC) in Azure Active Directory (Azure AD) to control access to the App Service control plane at the Azure portal.
+**Guidance**: Use Azure role-based access control (Azure RBAC) in Azure Active Directory (Azure AD) to control access to the App Service control plane at the Azure portal.
-- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
**Azure Security Center monitoring**: Currently not available
app-service Troubleshoot Dotnet Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-dotnet-visual-studio.md
Visual Studio provides access to a subset of the app management functions and co
> >
- For more information about connecting to Azure resources from Visual Studio, see [Manage Accounts, Subscriptions, and Administrative Roles](../role-based-access-control/role-assignments-portal.md).
+ For more information about connecting to Azure resources from Visual Studio, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
2. In **Server Explorer**, expand **Azure** and expand **App Service**. 3. Expand the resource group that includes the app that you created in [Create an ASP.NET app in Azure App Service](quickstart-dotnet-framework.md), and then right-click the app node and click **View Settings**.
automation Manage Runas Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runas-account.md
Before granting the Run As account permissions, you need to first note the displ
For detailed steps for how to add role assignments, check out the following articles depending on the method you want to use.
-* [Add Azure role assignment from the Azure portal](../role-based-access-control/role-assignments-portal.md)
-* [Add Azure role assignment using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
-* [Add Azure role assignment using the Azure CLI](../role-based-access-control/role-assignments-cli.md)
-* [Add Azure role assignment using the REST API](..//role-based-access-control/role-assignments-rest.md)
+* [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
+* [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
+* [Assign Azure roles using the Azure CLI](../role-based-access-control/role-assignments-cli.md)
+* [Assign Azure roles using the REST API](..//role-based-access-control/role-assignments-rest.md)
After assigning the Run As account to the role, in your runbook specify `Set-AzContext -SubscriptionId "xxxx-xxxx-xxxx-xxxx"` to set the subscription context to use. For more information, see [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
azure-app-configuration Howto Backup Config Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-backup-config-store.md
az functionapp identity assign --name $functionAppName --resource-group $resourc
``` > [!NOTE]
-> To perform the required resource creation and role management, your account needs `Owner` permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, learn [how to add or remove Azure role assignments by using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+> To perform the required resource creation and role management, your account needs `Owner` permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, learn [how to assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
Use the following commands or the [Azure portal](./howto-integrate-azure-managed-service-identity.md#grant-access-to-app-configuration) to grant the managed identity of your function app access to your App Configuration stores. Use these roles: - Assign the `App Configuration Data Reader` role in the primary App Configuration store.
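The commands themselves are truncated in this excerpt. As a sketch of granting the first role (the store's resource ID is a placeholder; `$functionAppName` and `$resourceGroupName` come from the earlier commands):

```bash
# Sketch: grants the function app's system-assigned identity read access to the primary store.
principalId=$(az functionapp identity show \
    --name $functionAppName \
    --resource-group $resourceGroupName \
    --query principalId --output tsv)

az role assignment create \
    --assignee-object-id "$principalId" \
    --assignee-principal-type ServicePrincipal \
    --role "App Configuration Data Reader" \
    --scope "<resource-ID-of-the-primary-App-Configuration-store>"
```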
azure-app-configuration Rest Api Authorization Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/rest-api-authorization-azure-ad.md
HTTP/1.1 403 Forbidden
## Managing role assignments
-You can manage role assignments by using [RBAC procedures](../role-based-access-control/overview.md) that are standard across all Azure services. You can do this through the Azure CLI, PowerShell, and the Azure portal. For more information, see [Add or remove Azure role assignments by using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+You can manage role assignments by using [Azure RBAC procedures](../role-based-access-control/overview.md) that are standard across all Azure services. You can do this through the Azure CLI, PowerShell, and the Azure portal. For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
azure-app-configuration Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-baseline.md
App Configuration supports storing configuration of multiple applications in one
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/onboard-service-principal.md
The values from the following properties are used with parameters passed to the
> Make sure to use the service principal **ApplicationId** property, not the **Id** property. >
-The **Azure Connected Machine Onboarding** role contains only the permissions required to onboard a machine. You can assign the service principal permission to allow its scope to include a resource group or a subscription. To add role assignment, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md) or [Add or remove Azure role assignments using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
+The **Azure Connected Machine Onboarding** role contains only the permissions required to onboard a machine. You can assign the service principal permission to allow its scope to include a resource group or a subscription. To add role assignment, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md) or [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
## Install the agent and connect to Azure
azure-cache-for-redis Cache Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices.md
If you would like to test how your code works under error conditions, consider u
* The client VM used for testing should be **in the same region** as your Redis cache instance.
* **We recommend using Dv2 VM Series** for your client as they have better hardware and will give the best results.
* Make sure the client VM you use has **at least as much compute and bandwidth** as the cache being tested.
+ * **Test under failover conditions** on your cache. It's important that you don't performance test your cache only under steady-state conditions. Also test under failover conditions and measure the CPU / Server Load on your cache during that time. You can initiate a failover by [rebooting the primary node](cache-administration.md#reboot); a CLI sketch follows this list. This lets you see how your application behaves in terms of throughput and latency during failover conditions (failovers happen during updates and can happen during unplanned events). Ideally, you don't want to see CPU / Server Load peak above roughly 80%, even during a failover, because that can affect performance.
* **Enable VRSS** on the client machine if you are on Windows. [See here for details](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn383582(v=ws.11)). Example PowerShell script: >PowerShell -ExecutionPolicy Unrestricted Enable-NetAdapterRSS -Name ( Get-NetAdapter).Name
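Here's the CLI sketch referenced in the failover bullet above (cache name and resource group are placeholders); run it mid-test and watch throughput, latency, and Server Load while the failover completes:

```bash
# Sketch: reboots the primary node of the cache to force a failover during the test run.
az redis force-reboot \
    --name yourcache \
    --resource-group myResourceGroup \
    --reboot-type PrimaryNode
```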
Test GET requests using a 1k payload.
**To test throughput:** Pipelined GET requests with 1k payload.
-> redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50
+> redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n 1000000 -d 1024 -P 50 -c 50
azure-cache-for-redis Cache Failover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-failover.md
Because you can't avoid failovers completely, write your client applications for
To test a client application's resiliency, use a [reboot](cache-administration.md#reboot) as a manual trigger for connection breaks. Additionally, we recommend that you [schedule updates](cache-administration.md#schedule-updates) on a cache. Tell the management service to apply Redis runtime patches during specified weekly windows. These windows are typically periods when client application traffic is low, to avoid potential incidents.
+### Can I be notified in advance of a planned maintenance?
+
+Azure Cache for Redis now publishes notifications on a publish/subscribe channel called [AzureRedisEvents](https://github.com/Azure/AzureCacheForRedis/blob/main/AzureRedisEvents.md) around 30 seconds before planned updates. These are runtime notifications, and they're built especially for applications that can use circuit breakers to bypass the cache or buffer commands, for example, during planned updates. It's not a mechanism that can notify you days or hours in advance.
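To watch these notifications arrive, you can subscribe to the channel directly. A minimal sketch with `redis-cli` (host name and access key are placeholders; `--tls` assumes a build of redis-cli with TLS support connecting on port 6380):

```bash
# Sketch: prints AzureRedisEvents messages as they are published, roughly 30 seconds before planned updates.
redis-cli -h yourcache.redis.cache.windows.net -p 6380 --tls -a yourAccessKey SUBSCRIBE AzureRedisEvents
```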
+
### Client network-configuration changes

Certain client-side network-configuration changes can trigger "No connection available" errors. Such changes might include:
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
ms.devlang: na
 Previously updated : 02/03/2021
 Last updated : 02/17/2021

# Compare Azure Government and global Azure
You need to open some **outgoing ports** in your server's firewall to allow the
|-||-|--| |Telemetry|dc.applicationinsights.us|23.97.4.113|443|
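A quick way to confirm the outgoing port is reachable from your server is a plain TCP check. A sketch, assuming `nc` (netcat) is installed:

```bash
# Sketch: verifies that outbound TCP 443 to the Azure Government telemetry endpoint is allowed.
nc -vz dc.applicationinsights.us 443
```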
+### [Azure Lighthouse](../lighthouse/overview.md)
+
+The following Azure Lighthouse **features are not currently available** in Azure Government:
+- Managed Service offers published to Azure Marketplace
+ ### [Azure Monitor](../azure-monitor/logs/data-platform-logs.md) The following Azure Monitor **features are not currently available** in Azure Government:
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/azure-maps-authentication.md
Azure Maps supports access to all principal types for [Azure role-based access c
The next sections discuss concepts and components of Azure Maps integration with Azure RBAC. As part of the process to set up your Azure Maps account, an Azure AD directory is associated to the Azure subscription which the Azure Maps account resides.
-When you configure Azure RBAC, you choose a security principal and apply it to a role assignment. To learn how to add role assignments on the Azure portal, see [Add or remove Azure role assignments](../role-based-access-control/role-assignments-portal.md).
+When you configure Azure RBAC, you choose a security principal and apply it to a role assignment. To learn how to add role assignments on the Azure portal, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
### Picking a role definition
azure-maps How To Secure Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-spa-app.md
You grant *Azure role-based access control (Azure RBAC)* access by assigning the
2. On the **Role assignments** tab, under **Role**, select a built-in Azure Maps role definition such as **Azure Maps Data Reader** or **Azure Maps Data Contributor**. Under **Assign access to**, select **Function App**. Select the principal by name. Then select **Save**.
- * See details on [Add or remove role assignments](../role-based-access-control/role-assignments-portal.md).
+ * See details on [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
> [!WARNING]
> Azure Maps built-in role definitions grant very broad access to many Azure Maps REST APIs. To restrict API access to a minimum, see [create a custom role definition and assign the system-assigned identity](../role-based-access-control/custom-roles.md) to the custom role definition. This will enable the least privilege necessary for the application to access Azure Maps.
azure-monitor Metrics Dynamic Scope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-dynamic-scope.md
Some resource types can query for metrics over multiple resources. The metrics m
![Screenshot that shows a menu of resources that are compatible with multiple resources.](./media/metrics-dynamic-scope/020.png) > [!WARNING]
-> You must have Monitoring Reader permission at the subscription level to visualize metrics across multiple resources, resource groups, or a subscription. For more information, see [Add or remove Azure role assignments by using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+> You must have Monitoring Reader permission at the subscription level to visualize metrics across multiple resources, resource groups, or a subscription. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
To visualize metrics over multiple resources, start by selecting multiple resources within the resource scope picker.
In this example, we filter by TailspinToysDemo. Here, the filter removes metrics
## Pin multiple-resource charts
-Multiple-resource charts that visualize metrics across resource groups and subscriptions require the user to have *Monitoring Reader* permission at the subscription level. Ensure that all users of the dashboards to which you pin multiple-resource charts have sufficient permissions. For more information, see [Add or remove Azure role assignments by using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+Multiple-resource charts that visualize metrics across resource groups and subscriptions require the user to have *Monitoring Reader* permission at the subscription level. Ensure that all users of the dashboards to which you pin multiple-resource charts have sufficient permissions. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
To pin your multiple-resource chart to a dashboard, see [Pinning to dashboards](../essentials/metrics-charts.md#pinning-to-dashboards).
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
For important additional information, see [Monitoring Agents Overview](../agents
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|active-timer-count|Yes|System.Runtime|active-timer-count|Count|Average|Number of timers that are currently active|Deployment, AppName, Pod|
-|alloc-rate|Yes|System.Runtime|alloc-rate|Bytes|Average|Number of bytes allocated in the managed heap|Deployment, AppName, Pod|
+|active-timer-count|Yes|active-timer-count|Count|Average|Number of timers that are currently active|Deployment, AppName, Pod|
+|alloc-rate|Yes|alloc-rate|Bytes|Average|Number of bytes allocated in the managed heap|Deployment, AppName, Pod|
|AppCpuUsage|Yes|App CPU Usage (preview)|Percent|Average|The recent CPU usage for the app|Deployment, AppName, Pod|
-|assembly-count|Yes|System.Runtime|assembly-count|Count|Average|Number of Assemblies Loaded|Deployment, AppName, Pod|
-|cpu-usage|Yes|System.Runtime|cpu-usage|Percent|Average|% time the process has utilized the CPU|Deployment, AppName, Pod|
-|current-requests|Yes|Microsoft.AspNetCore.Hosting|current-requests|Count|Average|Total number of requests in processing in the lifetime of the process|Deployment, AppName, Pod|
-|exception-count|Yes|System.Runtime|exception-count|Count|Total|Number of Exceptions|Deployment, AppName, Pod|
-|failed-requests|Yes|Microsoft.AspNetCore.Hosting|failed-requests|Count|Average|Total number of failed requests in the lifetime of the process|Deployment, AppName, Pod|
-|gc-heap-size|Yes|System.Runtime|gc-heap-size|Count|Average|Total heap size reported by the GC (MB)|Deployment, AppName, Pod|
-|gen-0-gc-count|Yes|System.Runtime|gen-0-gc-count|Count|Average|Number of Gen 0 GCs|Deployment, AppName, Pod|
-|gen-0-size|Yes|System.Runtime|gen-0-size|Bytes|Average|Gen 0 Heap Size|Deployment, AppName, Pod|
-|gen-1-gc-count|Yes|System.Runtime|gen-1-gc-count|Count|Average|System.Runtime|Number of Gen 1 GCs|Deployment, AppName, Pod|
-|gen-1-size|Yes|System.Runtime|gen-1-size|Bytes|Average|Gen 1 Heap Size|Deployment, AppName, Pod|
-|gen-2-gc-count|Yes|System.Runtime|gen-2-gc-count|Count|Average|Number of Gen 2 GCs|Deployment, AppName, Pod|
-|gen-2-size|Yes|System.Runtime|gen-2-size|Bytes|Average|Gen 2 Heap Size|Deployment, AppName, Pod|
+|assembly-count|Yes|assembly-count|Count|Average|Number of Assemblies Loaded|Deployment, AppName, Pod|
+|cpu-usage|Yes|cpu-usage|Percent|Average|% time the process has utilized the CPU|Deployment, AppName, Pod|
+|current-requests|Yes|current-requests|Count|Average|Total number of requests in processing in the lifetime of the process|Deployment, AppName, Pod|
+|exception-count|Yes|exception-count|Count|Total|Number of Exceptions|Deployment, AppName, Pod|
+|failed-requests|Yes|failed-requests|Count|Average|Total number of failed requests in the lifetime of the process|Deployment, AppName, Pod|
+|gc-heap-size|Yes|gc-heap-size|Count|Average|Total heap size reported by the GC (MB)|Deployment, AppName, Pod|
+|gen-0-gc-count|Yes|gen-0-gc-count|Count|Average|Number of Gen 0 GCs|Deployment, AppName, Pod|
+|gen-0-size|Yes|gen-0-size|Bytes|Average|Gen 0 Heap Size|Deployment, AppName, Pod|
+|gen-1-gc-count|Yes|gen-1-gc-count|Count|Average|Number of Gen 1 GCs|Deployment, AppName, Pod|
+|gen-1-size|Yes|gen-1-size|Bytes|Average|Gen 1 Heap Size|Deployment, AppName, Pod|
+|gen-2-gc-count|Yes|gen-2-gc-count|Count|Average|Number of Gen 2 GCs|Deployment, AppName, Pod|
+|gen-2-size|Yes|gen-2-size|Bytes|Average|Gen 2 Heap Size|Deployment, AppName, Pod|
|jvm.gc.live.data.size|Yes|jvm.gc.live.data.size|Bytes|Average|Size of old generation memory pool after a full GC|Deployment, AppName, Pod| |jvm.gc.max.data.size|Yes|jvm.gc.max.data.size|Bytes|Average|Max size of old generation memory pool|Deployment, AppName, Pod| |jvm.gc.memory.allocated|Yes|jvm.gc.memory.allocated|Bytes|Maximum|Incremented for an increase in the size of the young generation memory pool after one GC to before the next|Deployment, AppName, Pod|
For important additional information, see [Monitoring Agents Overview](../agents
|jvm.memory.committed|Yes|jvm.memory.committed|Bytes|Average|Memory assigned to JVM in bytes|Deployment, AppName, Pod| |jvm.memory.max|Yes|jvm.memory.max|Bytes|Maximum|The maximum amount of memory in bytes that can be used for memory management|Deployment, AppName, Pod| |jvm.memory.used|Yes|jvm.memory.used|Bytes|Average|App Memory Used in bytes|Deployment, AppName, Pod|
-|loh-size|Yes|System.Runtime|loh-size|Bytes|Average|LOH Heap Size|Deployment, AppName, Pod|
-|monitor-lock-contention-count|Yes|System.Runtime|monitor-lock-contention-count|Count|Average|Number of times there were contention when trying to take the monitor lock|Deployment, AppName, Pod|
+|loh-size|Yes|loh-size|Bytes|Average|LOH Heap Size|Deployment, AppName, Pod|
+|monitor-lock-contention-count|Yes|monitor-lock-contention-count|Count|Average|Number of times there were contention when trying to take the monitor lock|Deployment, AppName, Pod|
|process.cpu.usage|Yes|process.cpu.usage|Percent|Average|The recent CPU usage for the JVM process|Deployment, AppName, Pod|
-|requests-per-second|Yes|Microsoft.AspNetCore.Hosting|requests-rate|Count|Average|Request rate|Deployment, AppName, Pod|
+|requests-per-second|Yes|requests-rate|Count|Average|Request rate|Deployment, AppName, Pod|
|system.cpu.usage|Yes|system.cpu.usage|Percent|Average|The recent CPU usage for the whole system|Deployment, AppName, Pod|
-|threadpool-completed-items-count|Yes|System.Runtime|threadpool-completed-items-count|Count|Average|ThreadPool Completed Work Items Count|Deployment, AppName, Pod|
-|threadpool-queue-length|Yes|System.Runtime|threadpool-queue-length|Count|Average|ThreadPool Work Items Queue Length|Deployment, AppName, Pod|
-|threadpool-thread-count|Yes|System.Runtime|threadpool-thread-count|Count|Average|Number of ThreadPool Threads|Deployment, AppName, Pod|
-|time-in-gc|Yes|System.Runtime|time-in-gc|Percent|Average|% time in GC since the last GC|Deployment, AppName, Pod|
+|threadpool-completed-items-count|Yes|threadpool-completed-items-count|Count|Average|ThreadPool Completed Work Items Count|Deployment, AppName, Pod|
+|threadpool-queue-length|Yes|threadpool-queue-length|Count|Average|ThreadPool Work Items Queue Length|Deployment, AppName, Pod|
+|threadpool-thread-count|Yes|threadpool-thread-count|Count|Average|Number of ThreadPool Threads|Deployment, AppName, Pod|
+|time-in-gc|Yes|time-in-gc|Percent|Average|% time in GC since the last GC|Deployment, AppName, Pod|
|tomcat.global.error|Yes|tomcat.global.error|Count|Total|Tomcat Global Error|Deployment, AppName, Pod| |tomcat.global.received|Yes|tomcat.global.received|Bytes|Total|Tomcat Total Received Bytes|Deployment, AppName, Pod| |tomcat.global.request.avg.time|Yes|tomcat.global.request.avg.time|Milliseconds|Average|Tomcat Request Average Time|Deployment, AppName, Pod|
For important additional information, see [Monitoring Agents Overview](../agents
|tomcat.sessions.rejected|Yes|tomcat.sessions.rejected|Count|Total|Tomcat Session Rejected Count|Deployment, AppName, Pod| |tomcat.threads.config.max|Yes|tomcat.threads.config.max|Count|Total|Tomcat Config Max Thread Count|Deployment, AppName, Pod| |tomcat.threads.current|Yes|tomcat.threads.current|Count|Total|Tomcat Current Thread Count|Deployment, AppName, Pod|
-|total-requests|Yes|Microsoft.AspNetCore.Hosting|total-requests|Count|Average|Total number of requests in the lifetime of the process|Deployment, AppName, Pod|
-|working-set|Yes|System.Runtime|working-set|Count|Average|Amount of working set used by the process (MB)|Deployment, AppName, Pod|
+|total-requests|Yes|total-requests|Count|Average|Total number of requests in the lifetime of the process|Deployment, AppName, Pod|
+|working-set|Yes|working-set|Count|Average|Amount of working set used by the process (MB)|Deployment, AppName, Pod|
## Microsoft.Automation/automationAccounts
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-access.md
The following activities also require Azure permissions:
## Manage access using Azure permissions
-To grant access to the Log Analytics workspace using Azure permissions, follow the steps in [use role assignments to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md). For example custom roles, see [Example custom roles](#custom-role-examples)
+To grant access to the Log Analytics workspace using Azure permissions, follow the steps in [assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md). For example custom roles, see [Example custom roles](#custom-role-examples)
Azure has two built-in user roles for Log Analytics workspaces:
azure-portal Azure Portal Dashboards Create Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-dashboards-create-programmatically.md
After you configure the dashboard, the next step is to publish the dashboard usi
![sharing a dashboard](./media/azure-portal-dashboards-create-programmatically/share-command.png)
-Selecting **Share** prompts you to choose which subscription and resource group to publish to. You must have write access to the subscription and resource group that you choose. For more information, see [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+Selecting **Share** prompts you to choose which subscription and resource group to publish to. You must have write access to the subscription and resource group that you choose. For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
![make changes to sharing and access](./media/azure-portal-dashboards-create-programmatically/sharing-and-access.png)
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
You can verify that the application definition files are saved in your provided
## Make sure users can see your definition
-You have access to the managed application definition, but you want to make sure other users in your organization can access it. Grant them at least the Reader role on the definition. They may have inherited this level of access from the subscription or resource group. To check who has access to the definition and add users or groups, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+You have access to the managed application definition, but you want to make sure other users in your organization can access it. Grant them at least the Reader role on the definition. They may have inherited this level of access from the subscription or resource group. To check who has access to the definition and add users or groups, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Next steps
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/manage-resource-groups-portal.md
For information about exporting templates, see [Single and multi-resource export
## Manage access to resource groups
-[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) is the way that you manage access to resources in Azure. For more information, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) is the way that you manage access to resources in Azure. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Next steps
azure-resource-manager Manage Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/manage-resources-portal.md
You can select the pin icon on the upper right corner of the graphs to pin the g
## Manage access to resources
-[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) is the way that you manage access to resources in Azure. For more information, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) is the way that you manage access to resources in Azure. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Next steps
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
There are some important steps to do before moving a resource. By verifying thes
1. If you move a resource that has an Azure role assigned directly to the resource (or a child resource), the role assignment is not moved and becomes orphaned. After the move, you must re-create the role assignment. Eventually, the orphaned role assignment will be automatically removed, but it is a best practice to remove the role assignment before moving the resource.
- For information about how to manage role assignments, see [List Azure role assignments](../../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-at-a-scope) and [Add or remove Azure role assignments](../../role-based-access-control/role-assignments-portal.md).
+ For information about how to manage role assignments, see [List Azure role assignments](../../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-at-a-scope) and [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md).
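    One way to find such direct assignments before a move is to list role assignments at the resource scope; a minimal Azure CLI sketch (the resource ID is an illustrative placeholder):

    ```azurecli
    # List role assignments made at (not inherited by) this resource's scope; placeholder ID shown.
    az role assignment list --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"
    ```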
1. The source and destination subscriptions must be active. If you have trouble enabling an account that has been disabled, [create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Select **Subscription Management** for the issue type.
azure-resource-manager Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-baseline.md
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Use Azure AD RBAC to control access to data and resources, otherwise use service-specific access control methods. -- [How to configure RBAC in Azure](../../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../../role-based-access-control/role-assignments-portal.md)
**Azure Security Center monitoring**: Not applicable
azure-resource-manager Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-syntax.md
The following template shows the format for the data types. Each type has a defa
"defaultValue": 1 }, "boolParameter": {
- "type": "bool",
- "defaultValue": true
+ "type": "bool",
+ "defaultValue": true
}, "objectParameter": { "type": "object",
azure-resource-manager Template Tutorial Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-tutorial-use-key-vault.md
By using the static ID method, you don't need to make any changes to the templat
"adminPassword": { "reference": { "keyVault": {
- "id": "/subscriptions/<SubscriptionID>/resourceGroups/mykeyvaultdeploymentrg/providers/Microsoft.KeyVault/vaults/<KeyVaultName>"
+ "id": "/subscriptions/<SubscriptionID>/resourceGroups/mykeyvaultdeploymentrg/providers/Microsoft.KeyVault/vaults/<KeyVaultName>"
}, "secretName": "vmAdminPassword" }
azure-signalr Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/authenticate-application.md
You could also upload a certification instead of creating a client secret.
![Upload a Certification](./media/authenticate/certification.png)
-## Add RBAC roles using the Azure portal
-To learn more on managing access to Azure resources using RBAC and the Azure portal, see [this article](..//role-based-access-control/role-assignments-portal.md).
+## Assign Azure roles using the Azure portal
+To learn more on managing access to Azure resources using Azure RBAC and the Azure portal, see [this article](..//role-based-access-control/role-assignments-portal.md).
After you've determined the appropriate scope for a role assignment, navigate to that resource in the Azure portal. Display the access control (IAM) settings for the resource, and follow these instructions to manage role assignments:
azure-signalr Authenticate Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/authenticate-managed-identity.md
Once you've enabled this setting, a new service identity is created in your Azur
Now, assign this service identity to a role in the required scope in your Azure SignalR Service resources.
-## Assign RBAC roles using the Azure portal
-To learn more on managing access to Azure resources using RBAC and the Azure portal, see [this article](..//role-based-access-control/role-assignments-portal.md).
+## Assign Azure roles using the Azure portal
+To learn more on managing access to Azure resources using Azure RBAC and the Azure portal, see [this article](..//role-based-access-control/role-assignments-portal.md).
After you've determined the appropriate scope for a role assignment, navigate to that resource in the Azure portal. Display the access control (IAM) settings for the resource, and follow these instructions to manage role assignments:
azure-signalr Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-baseline.md
Use built-in roles to allocate permission and only create custom roles when requ
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/log-replay-service-migrate.md
Last updated 02/17/2021
# Migrate databases from SQL Server to SQL Managed Instance using Log Replay Service [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article explains how to manually configure database migration from SQL Server 2008-2019 to SQL Managed Instance using Log Replay Service (LRS). This is a cloud service enabled for managed instance based on the SQL Server log shipping technology in no recovery mode. LRS should be used in cases when Data Migration Service (DMS) cannot be used, when more control is needed or when there exists little tolerance for downtime.
+This article explains how to manually configure database migration from SQL Server 2008-2019 to SQL Managed Instance using Log Replay Service (LRS). This is a cloud service enabled for SQL Managed Instance, based on SQL Server log shipping technology. LRS should be used when Azure Data Migration Service (DMS) cannot be used, when more control is needed, or when there is little tolerance for downtime.
## When to use Log Replay Service
-In cases that [Azure DMS](https://docs.microsoft.com/azure/dms/tutorial-sql-server-to-managed-instance) cannot be used for migration, LRS cloud service can be used directly with PowerShell, CLI cmdlets, or API, to manually build and orchestrate database migrations to SQL managed instance.
+In cases where [Azure DMS](https://docs.microsoft.com/azure/dms/tutorial-sql-server-to-managed-instance) cannot be used for migration, the LRS cloud service can be used directly with PowerShell, CLI cmdlets, or the API to manually build and orchestrate database migrations to SQL Managed Instance.
You might want to consider using LRS cloud service in some of the following cases: - More control is needed for your database migration project
You might want to consider using LRS cloud service in some of the following case
- No access to host OS is available, or no Administrator privileges > [!NOTE]
-> Recommended automated way to migrate databases from SQL Server to SQL Managed Instance is using Azure DMS. This service is using the same LRS cloud service at the back end with log shipping in no-recovery mode. You should consider manually using LRS to orchestrate migrations in cases when Azure DMS does not fully support your scenarios.
+> The recommended automated way to migrate databases from SQL Server to SQL Managed Instance is Azure DMS. This service uses the same LRS cloud service at the back end with log shipping in NORECOVERY mode. You should consider manually using LRS to orchestrate migrations in cases when Azure DMS does not fully support your scenarios.
## How does it work Building a custom solution using LRS to migrate a database to the cloud requires several orchestration steps shown in the diagram and outlined in the table below.
-The migration entails making full database backups on SQL Server and copying backup files to Azure Blob storage. LRS is used to restore backup files from Azure Blob storage to SQL managed instance. Azure Blob storage is used as an intermediary storage between SQL Server and SQL Managed Instance.
+The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. LRS is used to restore backup files from Azure Blob Storage to SQL Managed Instance. Azure Blob Storage is used as an intermediary storage between SQL Server and SQL Managed Instance.
-LRS will monitor Azure Blob storage for any new differential, or log backups added after the full backup has been restored, and will automatically restore any new files added. The progress of backup files being restored on SQL managed instance can be monitored using the service, and the process can also be aborted if necessary. Databases being restored during the migration process will be in a restoring mode and cannot be used to read or write until the process has been completed.
+LRS will monitor Azure Blob Storage for any new differential or log backups added after the full backup has been restored, and will automatically restore any new files added. The progress of backup files being restored on SQL Managed Instance can be monitored using the service, and the process can also be aborted if necessary. Databases being restored during the migration process are in a restoring state and cannot be used for read or write operations until the process has been completed.
LRS can be started in autocomplete or continuous mode. When started in autocomplete mode, the migration completes automatically when the last specified backup file has been restored. When started in continuous mode, the service continuously restores any new backup files added, and the migration completes only on manual cutover. The final cutover step makes databases available for read and write use on SQL Managed Instance.
LRS can be started in autocomplete, or continuous mode. When started in autocomp
| Operation | Details | | :-- | :- |
-| **1. Copy database backups from SQL Server to Azure Blob storage**. | - Copy full, differential, and log backups from SQL Server to Azure Blob storage using [Azcopy](https://docs.microsoft.com/azure/storage/common/storage-use-azcopy-v10) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). <br />- In migrating several databases, a separate folder is required for each database. |
-| **2. Start the LRS service in the cloud**. | - Service can be started with a choice of cmdlets: <br /> PowerShell [start-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/start-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_start cmdlets](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_start). <br /><br />- Once started, the service will take backups from the Azure Blob storage and start restoring them on SQL Managed Instance. <br /> - Once all initially uploaded backups are restored, the service will watch for any new files uploaded to the folder and will continuously apply logs based on the LSN chain, until the service is stopped. |
+| **1. Copy database backups from SQL Server to Azure Blob Storage**. | - Copy full, differential, and log backups from SQL Server to Azure Blob Storage container using [Azcopy](https://docs.microsoft.com/azure/storage/common/storage-use-azcopy-v10) or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). <br />- In migrating several databases, a separate folder is required for each database. |
+| **2. Start the LRS service in the cloud**. | - Service can be started with a choice of cmdlets: <br /> PowerShell [start-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/start-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_start cmdlets](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_start). <br /><br />- Once started, the service will take backups from the Azure Blob Storage container and start restoring them on SQL Managed Instance. <br /> - Once all initially uploaded backups are restored, the service will watch for any new files uploaded to the folder and will continuously apply logs based on the LSN chain, until the service is stopped. See the CLI sketch after this table. |
| **2.1. Monitor the operation progress**. | - Progress of the restore operation can be monitored with a choice of cmdlets: <br /> PowerShell [get-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/get-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_show cmdlets](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_show). | | **2.2. Stop\abort the operation if needed**. | - In case the migration process needs to be aborted, the operation can be stopped with a choice of cmdlets: <br /> PowerShell [stop-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/stop-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_stop](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_stop) cmdlets. <br /><br />- This will result in deletion of the database being restored on SQL Managed Instance. <br />- Once stopped, LRS cannot be continued for a database. The migration process needs to be restarted from scratch. |
-| **3. Cutover to the cloud when ready**. | - Once all backups have been restored to SQL Managed Instance, complete the cutover by initiating LRS complete operation with a choice of API call, or cmdlets: <br />PowerShell [complete-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/complete-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_complete](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_complete) cmdlets. <br /><br />- This will cause LRS service to be stopped and database on Managed Instance will be recovered. <br />- Repoint the application connection string from SQL Server to SQL Managed Instance. <br />- On operation completion database is available for R/W operations in the cloud. |
+| **3. Cutover to the cloud when ready**. | - Once all backups have been restored to SQL Managed Instance, complete the cutover by initiating LRS complete operation with a choice of API call, or cmdlets: <br />PowerShell [complete-azsqlinstancedatabaselogreplay](https://docs.microsoft.com/powershell/module/az.sql/complete-azsqlinstancedatabaselogreplay) <br /> CLI [az_sql_midb_log_replay_complete](https://docs.microsoft.com/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_complete) cmdlets. <br /><br />- This will cause the LRS service to be stopped and the database on SQL Managed Instance will be recovered. <br />- Repoint the application connection string from SQL Server to SQL Managed Instance. <br />- On operation completion, the database is available for R/W operations in the cloud. |
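The flow above maps to CLI commands along these lines (a sketch only; the storage URI, SAS token, and instance names are placeholders, and exact parameter names should be checked against the az sql midb log-replay reference linked in the table):

```azurecli
# 2. Start LRS in autocomplete mode (the SAS token must carry Read and List permissions only).
az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb \
    --storage-uri "https://<storage-account>.blob.core.windows.net/<container>/<database-folder>" \
    --storage-sas "<sas-token>" \
    -a --last-backup-name "last_backup.bak"

# 2.1. Monitor restore progress.
az sql midb log-replay show -g mygroup --mi myinstance -n mymanageddb

# 2.2. Abort if necessary (deletes the restoring database; the migration must restart from scratch).
az sql midb log-replay stop -g mygroup --mi myinstance -n mymanageddb

# 3. In continuous mode, cut over manually once the last backup has been restored.
az sql midb log-replay complete -g mygroup --mi myinstance -n mymanageddb --last-backup-name "last_backup.bak"
```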
## Requirements for getting started
LRS can be started in autocomplete, or continuous mode. When started in autocomp
- Full backup of databases (one or multiple files) - Differential backup (one or multiple files) - Log backup (not split for transaction log file)-- **CHECKSUM must be enabled** as mandatory
+- **CHECKSUM must be enabled** for backups (mandatory)
### Azure side-- PowerShell Az.SQL module version 2.16.0, or above ([install](https://www.powershellgallery.com/packages/Az.Sql/), or use Azure [Cloud Shell](https://docs.microsoft.com/azure/cloud-shell/))-- CLI version 2.19.0, or above ([install](https://docs.microsoft.com/cli/azure/install-azure-cli))-- Azure Blob Storage provisioned-- SAS security token with **read** and **list** only permissions generated for the blob storage
+- PowerShell Az.SQL module version 2.16.0, or above ([install](https://www.powershellgallery.com/packages/Az.Sql/), or use Azure [Cloud Shell](https://docs.microsoft.com/azure/cloud-shell/))
+- CLI version 2.19.0, or above ([install](https://docs.microsoft.com/cli/azure/install-azure-cli))
+- Azure Blob Storage container provisioned
+- SAS security token with **Read** and **List** only permissions generated for the blob storage container
## Best practices
The following are highly recommended as best practices:
- Plan to complete the migration within 47 hours after the LRS service has been started. > [!IMPORTANT]
-> - Database being restored using LRS cannot be used until the migration process has been completed. This is because underlying technology is log shipping in no recovery mode.
-> - Standby mode for log shipping is not supported by LRS due to the version differences between SQL Managed Instance and latest in-market SQL Server version.
+> - A database being restored using LRS cannot be used until the migration process has been completed. This is because the underlying technology is log shipping in NORECOVERY mode.
+> - STANDBY mode for log shipping is not supported by LRS due to the version differences between SQL Managed Instance and latest in-market SQL Server version.
## Steps to execute
-## Copy backups from SQL Server to Azure Blob storage
+## Copy backups from SQL Server to Azure Blob Storage
-The following two approaches can be utilized to copy backups to the blob storage in migrating databases to Managed Instance using LRS:
+The following two approaches can be used to copy backups to Blob Storage when migrating databases to SQL Managed Instance using LRS:
- Using SQL Server native [BACKUP TO URL](https://docs.microsoft.com/sql/relational-databases/backup-restore/sql-server-backup-to-url) functionality. - Copying the backups to Blob Container using [Azcopy](https://docs.microsoft.com/azure/storage/common/storage-use-azcopy-v10), or [Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer). ## Create Azure Blob and SAS authentication token
-Azure Blob storage is used as an intermediary storage for backup files between SQL Server and SQL Managed Instance. Follow these steps to create Azure Blob storage container:
+Azure Blob Storage is used as intermediary storage for backup files between SQL Server and SQL Managed Instance. Follow these steps to create an Azure Blob Storage container:
1. [Create a storage account](https://docs.microsoft.com/azure/storage/common/storage-account-create?tabs=azure-portal) 2. [Create a blob container](https://docs.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-portal) inside the storage account (see the CLI sketch below)
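A minimal Azure CLI sketch of these two steps (account, container, and region names are placeholders):

```azurecli
# Create a standard storage account and a blob container to hold the backup files (placeholder names).
az storage account create --name <storage-account> --resource-group <resource-group> --location <region> --sku Standard_LRS
az storage container create --name <container> --account-name <storage-account> --auth-mode login
```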
Once a blob container has been created, generate SAS authentication token with R
9. Copy the token starting with "sv=" in the URI for use in your code > [!IMPORTANT]
-> Permissions for the SAS token for Azure Blob storage need to be Read and List only. In case of any other permissions granted for the SAS authentication token, starting LRS service will fail. These security requirements are by design.
+> Permissions for the SAS token for Azure Blob Storage need to be Read and List only. If any other permissions are granted for the SAS authentication token, starting the LRS service will fail. These security requirements are by design.
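For example, a container SAS restricted to Read and List can be generated with the Azure CLI (a sketch; names and expiry are placeholders, and depending on your setup you may also need to pass `--account-key` or a connection string):

```azurecli
# Generate a SAS token with Read (r) and List (l) permissions only; placeholder values shown.
az storage container generate-sas \
    --account-name <storage-account> \
    --name <container> \
    --permissions rl \
    --expiry <yyyy-mm-ddTHH:MMZ> \
    --output tsv
```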
## Log in to Azure and select subscription
az sql midb log-replay show -g mygroup --mi myinstance -n mymanageddb
## Stop the migration
-In case you need to stop the migration, use the following cmdlets. Stopping the migration will delete the restoring database on SQL managed instance due to which it will not be possible to resume the migration.
+In case you need to stop the migration, use the following cmdlets. Stopping the migration will delete the restoring database on SQL Managed Instance, which means the migration cannot be resumed.
To stop\abort the migration process, use the following PowerShell command:
To complete the migration process in LRS continuous mode, use the following Powe
```powershell Complete-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" ` -InstanceName "ManagedInstance01" `--Name "ManagedDatabaseName" -LastBackupName "last_backup.bak"
+-Name "ManagedDatabaseName" `
+-LastBackupName "last_backup.bak"
``` To complete the migration process in LRS continuous mode, use the following CLI command:
azure-sql User Initiated Failover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/user-initiated-failover.md
Previously updated : 01/26/2021 Last updated : 02/17/2021 # User-initiated manual failover on SQL Managed Instance This article explains how to manually fail over a primary node on SQL Managed Instance General Purpose (GP) and Business Critical (BC) service tiers, and how to manually fail over a secondary read-only replica node on the BC service tier only.
blockchain Configure Transaction Nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/configure-transaction-nodes.md
To grant Azure AD access control to your endpoint:
1. Select **Save** to add the role assignment.
-For more information on Azure AD access control, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
+For more information on Azure AD access control, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
For details on how to connect using Azure AD authentication, see [connect to your node using AAD authentication](configure-aad.md).
cdn Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/security-baseline.md
Additionally, use built-in roles to allocate permission and only create custom r
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
cognitive-services Luis How To Collaborate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-collaborate.md
You have migrated if your LUIS authoring experience is tied to an Authoring reso
When the user's email is found, select the account and select **Save**.
- If you have trouble with this role assignment, review [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) and [Azure access control troubleshooting](../../role-based-access-control/troubleshooting.md#problems-with-azure-role-assignments).
+ If you have trouble with this role assignment, review [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md) and [Azure access control troubleshooting](../../role-based-access-control/troubleshooting.md#problems-with-azure-role-assignments).
## View the app as a contributor
cognitive-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/azure-resources.md
Your QnA Maker service deals with two kinds of keys: **authoring keys** and **qu
Use these keys when making requests to the service through APIs.
-![Key management](../media/qnamaker-how-to-key-management/key-management.png)
+![Key management](../media/authoring-key.png)
|Name|Location|Purpose| |--|--|--|
-|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the [QnA Maker management service APIs](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase). These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new QnA Maker service.<br><br>Find these keys on the **Cognitive Services** resource on the **Keys** page.|
+|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the [QnA Maker management service APIs](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase). These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new QnA Maker service.<br><br>Find these keys on the **Cognitive Services** resource on the **Keys and Endpoint** page.|
|Query endpoint key|[QnA Maker portal](https://www.qnamaker.ai)|These keys are used to query the published knowledge base endpoint to get a response for a user question. You typically use this query endpoint in your chat bot or in the client application code that connects to the QnA Maker service. These keys are created when you publish your QnA Maker knowledge base.<br><br>Find these keys in the **Service settings** page. Find this page from the user's menu in the upper right of the page on the drop-down menu.| ### Find authoring keys in the Azure portal
-You can view and reset your authoring keys from the Azure portal, where you created the QnA Maker resource. These keys may be referred to as subscription keys.
+You can view and reset your authoring keys from the Azure portal, where you created the QnA Maker resource.
1. Go to the QnA Maker resource in the Azure portal and select the resource that has the _Cognitive Services_ type:
Use these keys when making requests to the service through APIs.
|Name|Location|Purpose| |--|--|--|
-|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the [QnA Maker management service APIs](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase). These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new QnA Maker service.<br><br>Find these keys on the **Cognitive Services** resource on the **Keys** page.|
+|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the [QnA Maker management service APIs](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase). These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new QnA Maker service.<br><br>Find these keys on the **Cognitive Services** resource on the **Keys and Endpoint** page.|
|Azure Cognitive Search Admin Key|[Azure portal](../../../search/search-security-api-keys.md)|These keys are used to communicate with the Azure cognitive search service deployed in the userΓÇÖs Azure subscription. When you associate an Azure cognitive search with the QnA Maker managed (Preview) service, the admin key is automatically passed on to the QnA Maker service. <br><br>You can find these keys on the **Azure Cognitive Search** resource on the **Keys** page.| ### Find authoring keys in the Azure portal
-You can view and reset your authoring keys from the Azure portal, where you created the QnA Maker managed (Preview) resource. These keys may be referred to as subscription keys.
+You can view and reset your authoring keys from the Azure portal, where you created the QnA Maker managed (Preview) resource.
1. Go to the QnA Maker managed (Preview) resource in the Azure portal and select the resource that has the *Cognitive Services* type:
With QnA Maker managed (Preview) you have a choice to setup your QnA Maker servi
## Next steps
-* Learn about the QnA Maker [knowledge base](../index.yml)
+* Learn about the QnA Maker [knowledge base](../How-To/manage-knowledge-bases.md)
* Understand a [knowledge base life cycle](development-lifecycle-knowledge-base.md) * Review service and knowledge base [limits](../limits.md)
cognitive-services Set Up Qnamaker Service Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/set-up-qnamaker-service-azure.md
This procedure creates the Azure resources needed to manage the knowledge base c
* Select the IPs of "CognitiveServicesManagement". * Navigate to the networking section of your App Service resource, and select the "Configure Access Restriction" option to add the IPs to an allowlist (a CLI sketch follows the screenshot).
+ ![inbound port exceptions](../media/inbound-ports.png)
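As an alternative to the portal steps above, individual IPs can be added to the access-restriction allowlist with the Azure CLI (a sketch; rule name, priority, IP, and resource names are placeholders):

```azurecli
# Allow one CognitiveServicesManagement IP range on the App Service (placeholder values shown).
az webapp config access-restriction add \
    --resource-group <resource-group> \
    --name <app-service-name> \
    --rule-name "CognitiveServicesManagement-1" \
    --action Allow \
    --ip-address <ip-address>/32 \
    --priority 200
```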
+ We also have an automated script to do the same for your App Service. You can find the [PowerShell script to configure an allowlist](https://github.com/pchoudhari/QnAMakerBackupRestore/blob/master/AddRestrictedIPAzureAppService.ps1) on GitHub. You need to provide the subscription ID, resource group, and App Service name as script parameters. Running the script will automatically add the IPs to the App Service allowlist. ##### Configure App Service Environment to host QnA Maker App Service
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/whats-new.md
Learn what's new with QnA Maker.
### October 2019
-* [Explicitly setting the language](./index.yml) for all knowledge bases in the QnA Maker service.
+* Explicitly setting the language for all knowledge bases in the QnA Maker service.
### September 2019
-* Import and export with [XLS file format](./index.yml)
+* Import and export with XLS file format.
### June 2019
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Neural voices can be used to make interactions with chatbots and voice assistant
| Finnish (Finland) | `fi-FI` | Female | `fi-FI-SelmaNeural` <sup>New</sup> | General | | Finnish (Finland) | `fi-FI` | Male | `fi-FI-HarriNeural` <sup>New</sup> | General | | French (Canada) | `fr-CA` | Female | `fr-CA-SylvieNeural` | General |
+| French (Canada) | `fr-CA` | Male | `fr-CA-AntoineNeural` <sup>New</sup> | General |
| French (Canada) | `fr-CA` | Male | `fr-CA-JeanNeural` | General | | French (France) | `fr-FR` | Female | `fr-FR-DeniseNeural` | General | | French (France) | `fr-FR` | Male | `fr-FR-HenriNeural` | General |
See the following table for supported languages for the various Speaker Recognit
## Next steps * [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
-* [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-chsarp)
+* [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-csharp)
cognitive-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/deploy-label-tool.md
Follow these steps to create a new resource using the Azure portal:
> ![Select Docker](./media/quickstarts/select-docker.png) 6. Now let's configure your Docker container. All fields are required unless otherwise noted:-
- # [v2.0](#tab/v2-0)
+<!-- markdownlint-disable MD025 -->
+# [v2.1 preview](#tab/v2-1)
* Options - Select **Single Container** * Image Source - Select **Private Registry** * Server URL - Set this to `https://mcr.microsoft.com` * Username (Optional) - Create a username. * Password (Optional) - Create a secure password that you'll remember.
-* Image and tag - Set this to `mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest`
+* Image and tag - Set this to `mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview`
* Continuous Deployment - Set this to **On** if you want to receive automatic updates when the development team makes changes to the sample labeling tool. * Startup command - Set this to `./run.sh eula=accept`
- # [v2.1 preview](#tab/v2-1)
+# [v2.0](#tab/v2-0)
* Options - Select **Single Container** * Image Source - Select **Private Registry** * Server URL - Set this to `https://mcr.microsoft.com` * Username (Optional) - Create a username. * Password (Optional) - Create a secure password that you'll remember.
-* Image and tag - Set this to `mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview`
+* Image and tag - Set this to `mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest`
* Continuous Deployment - Set this to **On** if you want to receive automatic updates when the development team makes changes to the sample labeling tool. * Startup command - Set this to `./run.sh eula=accept`
-
+
> [!div class="mx-imgBorder"] > ![Configure Docker](./media/quickstarts/configure-docker.png)
Follow these steps to create a new resource using the Azure portal:
> [!IMPORTANT] > You may need to enable TLS for your web app in order to view it at its `https` address. Follow the instructions in [Enable a TLS endpoint](../../container-instances/container-instances-container-group-ssl.md) to set up a sidecar container that enables TLS/SSL for your web app.-
+<!-- markdownlint-disable MD001 -->
### Azure CLI As an alternative to using the Azure portal, you can create a resource using the Azure CLI. Before you continue, you'll need to install the [Azure CLI](/cli/azure/install-azure-cli). You can skip this step if you're already working with the Azure CLI.
There are a few things you need to know about this command:
* `DNS_NAME_LABEL=aci-demo-$RANDOM` generates a random DNS name. * This sample assumes that you have a resource group that you can use to create a resource. Replace `<resource_group_name>` with a valid resource group associated with your subscription.
-* You'll need to specify where you want to create the resource. Replace `<region name>` with your desired region for the web app.
+* You'll need to specify where you want to create the resource. Replace `<region name>` with your desired region for the web app.
* This command automatically accepts EULA. From the Azure CLI, run this command to create a web app resource for the sample labeling tool:
-# [v2.0](#tab/v2-0)
+<!-- markdownlint-disable MD024 -->
+# [v2.1 preview](#tab/v2-1)
```azurecli DNS_NAME_LABEL=aci-demo-$RANDOM
DNS_NAME_LABEL=aci-demo-$RANDOM
az container create \ --resource-group <resource_group_name> \ --name <name> \
- --image mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool \
+ --image mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview \
--ports 3000 \ --dns-name-label $DNS_NAME_LABEL \ --location <region name> \ --cpu 2 \ --memory 8 \ --command-line "./run.sh eula=accept"
-`
-# [v2.1 preview](#tab/v2-1)
-
+```
+
+# [v2.0](#tab/v2-0)
++ ```azurecli DNS_NAME_LABEL=aci-demo-$RANDOM az container create \ --resource-group <resource_group_name> \ --name <name> \
- --image mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview \
+ --image mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool \
--ports 3000 \ --dns-name-label $DNS_NAME_LABEL \ --location <region name> \ --cpu 2 \ --memory 8 \ --command-line "./run.sh eula=accept"
-```
+```
+
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/overview.md
Azure Form Recognizer is a cognitive service that lets you build automated data
Form Recognizer is composed of custom document processing models, prebuilt models for invoices, receipts and business cards, and the layout model. You can call Form Recognizer models by using a REST API or client library SDKs to reduce complexity and integrate it into your workflow or application. Form Recognizer is composed of the following + * **[Layout API](#layout-api)** - Extract text, selection marks, and tables structures, along with their bounding box coordinates, from documents. * **[Custom models](#custom-models)** - Extract text, key/value pairs, selection marks, and table data from forms. These models are trained with your own data, so they're tailored to your forms. * **[Prebuilt models](#prebuilt-models)** - Extract data from unique form types using prebuilt models. Currently available are the following prebuilt models
- * [Invoices](./concept-invoices.md)
- * [Sales receipts](./concept-receipts.md)
- * [Business cards](./concept-business-cards.md)
-
+ * [Invoices](./concept-invoices.md)
+ * [Sales receipts](./concept-receipts.md)
+ * [Business cards](./concept-business-cards.md)
## Try it out To try out the Form Recognizer Service, go to the online Sample UI Tool:
+<!-- markdownlint-disable MD025 -->
+# [v2.1 preview](#tab/v2-1)
+> [!div class="nextstepaction"]
+> [Try Form Recognizer](https://fott-preview.azurewebsites.net/)
# [v2.0](#tab/v2-0)
-> [!div class="nextstepaction"]
-> [Try Form Recognizer](https://fott.azurewebsites.net/)
-# [v2.1 preview](#tab/v2-1)
> [!div class="nextstepaction"]
-> [Try Form Recognizer](https://fott-preview.azurewebsites.net/)
+> [Try Form Recognizer](https://fott.azurewebsites.net/)
You'll use the following APIs to train models and extract structured data from f
| **Analyze Receipt** | Analyze a receipt document to extract key information, and other receipt text.| | **Analyze Business Card** | Analyze a business card to extract key information and text.|
+# [v2.1 preview](#tab/v2-1)
+Explore the [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
# [v2.0](#tab/v2-0) Explore the [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
-# [v2.1](#tab/v2-1)
-Explore the [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-2/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
- ## Input requirements
cognitive-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/label-tool.md
Title: "Quickstart: Label forms, train a model, and analyze forms using the sample labeling tool - Form Recognizer" description: In this quickstart, you'll use the Form Recognizer sample labeling tool to manually label form documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs.-+ - Last updated 01/29/2021-+ keywords: document processing -
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD034 -->
# Train a Form Recognizer model with labels using the sample labeling tool In this quickstart, you'll use the Form Recognizer REST API with the sample labeling tool to train a custom document processing model with manually labeled data. See the [Train with labels](../overview.md#train-with-labels) section of the overview to learn more about supervised learning with Form Recognizer.
To complete this quickstart, you must have:
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) * Once you have your Azure subscription, <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource <span class="docon docon-navigate-external x-hidden-focus"></span></a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
- * You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
- * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+ * You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
* A set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*) for this quickstart. Upload the training files to the root of a blob storage container in a standard-performance-tier Azure Storage account. ## Create a Form Recognizer resource
To complete this quickstart, you must have:
To try out the Form Recognizer Sample Labeling Tool online, go to the [FOTT website](https://fott-preview.azurewebsites.net/).
-# [v2.0](#tab/v2-0)
-> [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://fott.azurewebsites.net/)
+### [v2.1 preview](#tab/v2-1)
-# [v2.1 preview](#tab/v2-1)
> [!div class="nextstepaction"] > [Try Prebuilt Models](https://fott-preview.azurewebsites.net/) -
+### [v2.0](#tab/v2-0)
-You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer service.
+> [!div class="nextstepaction"]
+> [Try Prebuilt Models](https://fott.azurewebsites.net/)
++
+You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer service.
## Set up the sample labeling tool You'll use the Docker engine to run the sample labeling tool. Follow these steps to set up the Docker container. For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/). > [!TIP]
-> The OCR Form Labeling Tool is also available as an open source project on GitHub. The tool is a TypeScript web application built using React + Redux. To learn more or contribute, see the [OCR Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md#run-as-web-application) repo. To try out the tool online, go to the [FOTT website](https://fott.azurewebsites.net/).
+> The OCR Form Labeling Tool is also available as an open source project on GitHub. The tool is a TypeScript web application built using React + Redux. To learn more or contribute, see the [OCR Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md#run-as-web-application) repo. To try out the tool online, go to the [FOTT website](https://fott.azurewebsites.net/).
-1. First, install Docker on a host computer. This guide will show you how to use local computer as a host. If you want to use a Docker hosting service in Azure, see the [Deploy the sample labeling tool](../deploy-label-tool.md) how-to guide.
+1. First, install Docker on a host computer. This guide will show you how to use your local computer as a host. If you want to use a Docker hosting service in Azure, see the [Deploy the sample labeling tool](../deploy-label-tool.md) how-to guide.
The host computer must meet the following hardware requirements:
You'll use the Docker engine to run the sample labeling tool. Follow these steps
|:--|:--|:--| |Sample labeling tool|2 core, 4-GB memory|4 core, 8-GB memory|
- Install Docker on your machine by following the appropriate instructions for your operating system:
+ Install Docker on your machine by following the appropriate instructions for your operating system:
+ * [Windows](https://docs.docker.com/docker-for-windows/) * [macOS](https://docs.docker.com/docker-for-mac/) * [Linux](https://docs.docker.com/install/)
+1. Get the sample labeling tool container with the `docker pull` command.
+
+### [v2.1 preview](#tab/v2-1)
+```console
+ docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview
+```
-1. Get the sample labeling tool container with the `docker pull` command.
+### [v2.0](#tab/v2-0)
+
+```console
+docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool
+```
- # [v2.0](#tab/v2-0)
- ```
- docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool
- ```
- # [v2.1 preview](#tab/v2-1)
- ```
- docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview
- ```
+
+<br/>
+ 3. Now you're ready to run the container with `docker run`.
+
+### [v2.1 preview](#tab/v2-1)
-
+```console
+ docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview eula=accept
+```
-1. Now you're ready to run the container with `docker run`.
+### [v2.0](#tab/v2-0)
- # [v2.0](#tab/v2-0)
- ```
- docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool eula=accept
- ```
- # [v2.1 preview](#tab/v2-1)
- ```
- docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview eula=accept
- ```
+```console
+docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool eula=accept
+```
-
+ This command will make the sample labeling tool available through a web browser. Go to `http://localhost:3000`.
First, make sure all the training documents are of the same format. If you have
Enable CORS on your storage account. Select your storage account in the Azure portal and click the **CORS** tab on the left pane. On the bottom line, fill in the following values. Then click **Save** at the top. (An equivalent Azure CLI command is sketched after the list.)
-* Allowed origins = *
+* Allowed origins = *
* Allowed methods = \[select all\] * Allowed headers = *
-* Exposed headers = *
+* Exposed headers = *
* Max age = 200 > [!div class="mx-imgBorder"]
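As a sketch of the same configuration from the Azure CLI, the `az storage cors add` command can apply the values listed above to the Blob service. The storage account name and key are placeholders.

```console
# Mirror the CORS values above for the Blob service (services "b")
az storage cors add \
    --services b \
    --methods DELETE GET HEAD MERGE OPTIONS POST PUT \
    --origins "*" \
    --allowed-headers "*" \
    --exposed-headers "*" \
    --max-age 200 \
    --account-name <your-storage-account> \
    --account-key <your-account-key>
```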
When you create or open a project, the main tag editor window opens. The tag edi
* A resizable preview pane that contains a scrollable list of forms from the source connection. * The main editor pane that allows you to apply tags.
-* The tags editor pane that allows users to modify, lock, reorder, and delete tags.
+* The tags editor pane that allows users to modify, lock, reorder, and delete tags.
### Identify text elements
It will also show which tables have been automatically extracted. Click on the t
Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze.
-# [v2.0](#tab/v2-0)
-1. First, use the tags editor pane to create the tags you'd like to identify.
- 1. Click **+** to create a new tag.
- 1. Enter the tag name.
- 1. Press Enter to save the tag.
-1. In the main editor, click to select words from the highlighted text elements.
+### [v2.1 preview](#tab/v2-1)
+
+1. First, use the tags editor pane to create the tags you'd like to identify:
+ * Click **+** to create a new tag.
+ * Enter the tag name.
+ * Press Enter to save the tag.
+1. In the main editor, click to select words from the highlighted text elements. In the _v2.1 preview.2_ API, you can also click to select _Selection Marks_ like radio buttons and checkboxes as key value pairs. Form Recognizer will identify whether the selection mark is "selected" or "unselected" as the value.
1. Click on the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane. > [!Tip]
- > Keep the following tips in mind when you're labeling your forms.
+ > Keep the following tips in mind when you're labeling your forms:
+ >
> * You can only apply one tag to each selected text element. > * Each tag can only be applied once per page. If a value appears multiple times on the same form, create different tags for each instance. For example: "invoice# 1", "invoice# 2" and so on. > * Tags cannot span across pages.
Next, you'll create tags (labels) and apply them to the text elements that you w
> * Table data should be detected automatically and will be available in the final output JSON file. However, if the model fails to detect all of your table data, you can manually tag these fields as well. Tag each cell in the table with a different label. If your forms have tables with varying numbers of rows, make sure you tag at least one form with the largest possible table. > * Use the buttons to the right of the **+** to search, rename, reorder, and delete your tags. > * To remove an applied tag without deleting the tag itself, select the tagged rectangle on the document view and press the delete key.
+ >
+### [v2.0](#tab/v2-0)
-# [v2.1 preview](#tab/v2-1)
1. First, use the tags editor pane to create the tags you'd like to identify. 1. Click **+** to create a new tag. 1. Enter the tag name. 1. Press Enter to save the tag.
-1. In the main editor, click to select words from the highlighted text elements. In the _v2.1 preview.2_ API, you can also click to select _Selection Marks_ like radio buttons and checkboxes as key value pairs. Form Recognizer will identify whether the selection mark is "selected" or "unselected" as the value.
+1. In the main editor, click to select words from the highlighted text elements.
1. Click on the tag you want to apply, or press the corresponding keyboard key. The number keys are assigned as hotkeys for the first 10 tags. You can reorder your tags using the up and down arrow icons in the tag editor pane. > [!Tip]
- > Keep the following tips in mind when you're labeling your forms.
+ > Keep the following tips in mind when you're labeling your forms:
+ >
> * You can only apply one tag to each selected text element. > * Each tag can only be applied once per page. If a value appears multiple times on the same form, create different tags for each instance. For example: "invoice# 1", "invoice# 2" and so on. > * Tags cannot span across pages.
Next, you'll create tags (labels) and apply them to the text elements that you w
> * Table data should be detected automatically and will be available in the final output JSON file. However, if the model fails to detect all of your table data, you can manually tag these fields as well. Tag each cell in the table with a different label. If your forms have tables with varying numbers of rows, make sure you tag at least one form with the largest possible table. > * Use the buttons to the right of the **+** to search, rename, reorder, and delete your tags. > * To remove an applied tag without deleting the tag itself, select the tagged rectangle on the document view and press the delete key.-
+>
:::image type="content" source="../media/label-tool/main-editor-2-1.png" alt-text="Main editor window of sample labeling tool."::: - Follow the steps above to label at least five of your forms. ### Specify tag value types
Optionally, you can set the expected data type for each tag. Open the context me
> ![Value type selection with sample labeling tool](../media/whats-new/value-type.png) The following value types and variations are currently supported:+ * `string`
- * default, `no-whitespaces`, `alphanumeric`
+ * default, `no-whitespaces`, `alphanumeric`
+ * `number`
- * default, `currency`
-* `date`
- * default, `dmy`, `mdy`, `ymd`
+ * default, `currency`
+
+* `date`
+ * default, `dmy`, `mdy`, `ymd`
+ * `time` * `integer` * `selectionMark` – _New in v2.1-preview.1!_ > [!NOTE] > See these rules for date formatting:
->
+>
> You must specify a format (`dmy`, `mdy`, `ymd`) for date formatting to work. > > The following characters can be used as date delimiters: `, - / . \`. Whitespace cannot be used as a delimiter. For example:
+>
> * 01,01,2020 > * 01-01-2020 > * 01/01/2020 > > The day and month can each be written as one or two digits, and the year can be two or four digits:
+>
> * 1-1-2020 > * 1-01-20 > > If a date string has eight digits, the delimiter is optional:
+>
> * 01012020 > * 01 01 2020 > > The month can also be written as its full or short name. If the name is used, delimiter characters are optional. However, this format may be recognized less accurately than others.
+>
> * 01/Jan/2020 > * 01Jan2020 > * 01 Jan 2020
After training finishes, examine the **Average Accuracy** value. If it's low, yo
## Compose trained models
-# [v2.0](#tab/v2-0)
-
-This feature is currently available in v2.1. preview.
-
-# [v2.1 preview](#tab/v2-1)
+### [v2.1 preview](#tab/v2-1)
With Model Compose, you can compose up to 100 models into a single model ID. When you call Analyze with this composed model ID, Form Recognizer first classifies the form you submitted, matches it to the best-fitting model, and then returns results for that model. This is useful when incoming forms may belong to one of several templates.
-To compose models in the sample labeling tool, click on the Model Compose (merging arrow) icon on the left. On the left, select the models you wish to compose together. Models with the arrows icon are already composed models.
-Click on the "Compose" button. In the pop-up, name your new composed model and click "Compose". When the operation completes, your new composed model should appear in the list.
+To compose models in the sample labeling tool, click on the Model Compose (merging arrow) icon on the left. On the left, select the models you wish to compose together. Models with the arrows icon are already composed models.
+Click on the "Compose" button. In the pop-up, name your new composed model and click "Compose". When the operation completes, your new composed model should appear in the list.
:::image type="content" source="../media/label-tool/model-compose.png" alt-text="Model compose UX view.":::
+### [v2.0](#tab/v2-0)
+
+This feature is currently available in the v2.1 preview.
+
-## Analyze a form
+## Analyze a form
Click on the Predict (light bulb) icon on the left to test your model. Upload a form document that you haven't used in the training process. Then click the **Predict** button on the right to get key/value predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
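If you want to script the same prediction outside the tool, the REST calls below are a hedged sketch: the endpoint, key, model ID, file name, and the exact API version string are placeholders or assumptions, so check the Form Recognizer API reference for the version you're actually using.

```console
# Submit a form for analysis with your trained custom model (asynchronous operation)
curl -X POST "https://<your-endpoint>.cognitiveservices.azure.com/formrecognizer/v2.1-preview.2/custom/models/<model-id>/analyze" \
    -H "Ocp-Apim-Subscription-Key: <your-key>" \
    -H "Content-Type: application/pdf" \
    --data-binary @test-form.pdf

# The response includes an Operation-Location header; poll that URL for the results
curl "https://<your-endpoint>.cognitiveservices.azure.com/formrecognizer/v2.1-preview.2/custom/models/<model-id>/analyzeResults/<result-id>" \
    -H "Ocp-Apim-Subscription-Key: <your-key>"
```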
The reported average accuracy, confidence scores, and actual accuracy can be inc
## Save a project and resume later
-To resume your project at another time or in another browser, you need to save your project's security token and reenter it later.
+To resume your project at another time or in another browser, you need to save your project's security token and reenter it later.
### Get project credentials+ Go to your project settings page (slider icon) and take note of the security token name. Then go to your application settings (gear icon), which shows all of the security tokens in your current browser instance. Find your project's security token and copy its name and key value to a secure location. ### Restore project credentials
-When you want to resume your project, you first need to create a connection to the same blob storage container. Repeat the steps above to do this. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Then click Save Settings.
+
+When you want to resume your project, you first need to create a connection to the same blob storage container. Repeat the steps above to do this. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Then click Save Settings.
### Resume a project
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
See the [Communication Services Chat client library Overview](./sdk-features.md)
## Chat overview
-Chat conversations happen within chat threads. A chat thread can contain many messages and many users. Every message belongs to a single thread, and a user can be a part of one or many threads.
+Chat conversations happen within chat threads. A chat thread can contain many messages and many users. Every message belongs to a single thread, and a user can be a part of one or many threads. Each user in the chat thread is called a participant. Only thread participants can send and receive messages and add or remove other users in a chat thread. Communication Services stores chat history until you execute a delete operation on the chat thread or message, or until no participants remain in the chat thread, at which point the chat thread is orphaned and queued for deletion.
-Each user in the chat thread is called a member. You can have up to 250 members in a chat thread. Only thread members can send and receive messages or add/remove members in a chat thread. The maximum message size allowed is approximately 28KB. You can retrieve all messages in a chat thread using the `List/Get Messages` operation. Communication Services stores chat history until you execute a delete operation on the chat thread or message, or until no members are remaining in the chat thread at which point it is orphaned and processed for deletion.
+## Service limits
-For chat threads with more than 20 members, read receipts and typing indicator features are disabled.
+- The maximum number of participants allowed in a chat thread is 250.
+- The maximum message size allowed is approximately 28 KB.
+- For chat threads with more than 20 participants, read receipts and typing indicator features aren't supported.
## Chat architecture
There are two core parts to chat architecture: 1) Trusted Service and 2) Client
:::image type="content" source="../../media/chat-architecture.png" alt-text="Diagram showing Communication Services' chat architecture.":::
+ - **Trusted service:** To properly manage a chat session, you need a service that helps you connect to Communication Services by using your resource connection string. This service is responsible for creating chat threads, managing thread participant lists, and providing access tokens to users. More information about access tokens can be found in our [access tokens](../../quickstarts/access-tokens.md) quickstart.
- **Client app:** The client application connects to your trusted service and receives the access tokens that are used to connect directly to Communication Services. After this connection is made, your client app can send and receive messages.
We recommend generating access tokens using the trusted service tier. In this sc
## Message types
-Communication Services Chat shares user-generated messages as well as system-generated messages called **Thread activities**. Thread activities are generated when a chat thread is updated. When you call `List Messages` or `Get Messages` on a chat thread, the result will contain the user-generated text messages as well as the system messages in chronological order. This helps you identify when a member was added or removed or when the chat thread topic was updated. Supported message types are:
+Communication Services Chat shares user-generated messages as well as system-generated messages called **Thread activities**. Thread activities are generated when a chat thread is updated. When you call `List Messages` or `Get Messages` on a chat thread, the result will contain the user-generated text messages as well as the system messages in chronological order. This helps you identify when a participant was added or removed or when the chat thread topic was updated. Supported message types are:
- `Text`: A plain text message composed and sent by a user as part of a chat conversation. - `RichText/HTML`: A formatted text message. Note that Communication Services users currently can't send RichText messages. This message type is supported by messages sent from Teams users to Communication Services users in Teams Interop scenarios.
Communication Services Chat shares user-generated messages as well as system-gen
The Chat JavaScript client library includes real-time signaling. This allows clients to listen for real-time updates and incoming messages to a chat thread without having to poll the APIs. Available events include:
+ - `ChatMessageReceived` - when a new message is sent to a chat thread. This event is not sent for the auto-generated system messages described in the previous section.
+ - `ChatMessageEdited` - when a message is edited in a chat thread.
+ - `ChatMessageDeleted` - when a message is deleted in a chat thread.
+ - `TypingIndicatorReceived` - when another participant is typing a message in a chat thread.
+ - `ReadReceiptReceived` - when another participant has read the message that a user sent in a chat thread.
+ - `ChatThreadCreated` - when a chat thread is created by a communication user.
+ - `ChatThreadDeleted` - when a chat thread is deleted by a communication user.
+ - `ChatThreadPropertiesUpdated` - when chat thread properties are updated; currently, we support only updating the topic for the thread.
+ - `ParticipantsAdded` - when a user is added as participant to a chat thread.
+ - `ParticipantsRemoved` - when an existing participant is removed from the chat thread.
## Chat events
You can use [Azure Cognitive APIs](../../../cognitive-services/index.yml) with t
- Help a support agent prioritize tickets by detecting a negative sentiment of an incoming issue from a customer. - Analyze the incoming messages for key detection and entity recognition, and prompt relevant info to the user in your app based on the message content.
-One way to achieve this is by having your trusted service act as a member of a chat thread. Let's say you want to enable language translation. This service will be responsible for listening to the messages being exchanged by other members [1], calling cognitive APIs to translate the content to desired language[2,3] and sending the translated result as a message in the chat thread[4].
+One way to achieve this is by having your trusted service act as a participant in a chat thread. Let's say you want to enable language translation. This service will be responsible for listening to the messages being exchanged by other participants [1], calling Cognitive APIs to translate the content to the desired language [2, 3], and sending the translated result as a message in the chat thread [4].
This way, the message history will contain both original and translated messages. In the client application, you can add logic to show the original or translated message. See [this quickstart](../../../cognitive-services/translator/quickstart-translator.md) to understand how to use Cognitive APIs to translate text to different languages.
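For example, the translation step in this flow can be as small as a single call to the Translator REST API. This sketch assumes English-to-Spanish translation; the key, region, and message text are placeholders.

```console
# Translate a chat message from English to Spanish with the Translator REST API
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=es" \
    -H "Ocp-Apim-Subscription-Key: <your-translator-key>" \
    -H "Ocp-Apim-Subscription-Region: <your-resource-region>" \
    -H "Content-Type: application/json" \
    -d '[{ "Text": "Hello, how can I help you today?" }]'
```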
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
Azure Communication Services Chat client libraries can be used to add rich, real
The following list presents the set of features which are currently available in the Communication Services chat client libraries.
-| Group of features | Capability | JS | Java | .NET | Python |
-| -- | - | | -- | - | -- |
-| Core Capabilities | Create a chat thread between 2 or more users (up to 250 users) | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Update the topic of a chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Add or remove members from a chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Choose whether to share chat message history with newly added members - *all/none/up to certain time* | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get a list of all chat members thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Delete a chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get a list of a user's chat thread memberships | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get info for a particular chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Send and receive messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Edit the content of a message after it's been sent | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Delete a message | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Tag a message with priority as normal or high at the time of sending | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Send and receive read receipts for messages that have been read by members <br/> *Not available when there are more than 20 members in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Send and receive typing notifications when a member is actively typing a message in a chat thread <br/> *Not available when there are more than 20 members in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Get all messages in a chat thread <br/> *Unicode emojis supported* | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Send emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ |
-|Real-time signaling (enabled by proprietary signalling package**)| Get notified when a user receives a new message in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ |
-| | Get notified when a message has been edited by another member in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ |
-| | Get notified when a message has been deleted by another member in a chat thread they're a member of | ✔️ | ❌ | ❌ | ❌ |
-| | Get notified when another chat thread member is typing | ✔️ | ❌ | ❌ | ❌ |
-| | Get notified when another member has read a message (read receipt) in the chat thread | ✔️ | ❌ | ❌ | ❌ |
-| Events | Use Event Grid to subscribe to user activity happening in chat threads and integrate custom notification services or business logic | ✔️ | ✔️ | ✔️ | ✔️ |
-| Monitoring | Monitor usage in terms of messages sent | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Monitor the quality and status of API requests made by your app and configure alerts via the portal | ✔️ | ✔️ | ✔️ | ✔️ |
-|Additional features | Use [Cognitive Services APIs](../../../cognitive-services/index.yml) along with chat client library to enable intelligent features - *language translation & sentiment analysis of the incoming message on a client, speech to text conversion to compose a message while the member speaks, etc.* | ✔️ | ✔️ | ✔️ | ✔️ |
+| Group of features | Capability | JavaScript | Java | .NET | Python | iOS | Android |
+| -- | - | | -- | - | -- | - | -- |
+| Core Capabilities | Create a chat thread between 2 or more users (up to 250 users) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Update the topic of a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Add or remove participants from a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Choose whether to share chat message history with the participant being added | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get a list of participants in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Delete a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Given a communication user, get the list of chat threads the user is part of | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get info for a particular chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Send and receive messages in a chat thread | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Edit the contents of a sent message | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Delete a message | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Read receipts for messages that have been read by other participants in a chat <br/> *Not available when there are more than 20 participants in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get notified when participants are actively typing a message in a chat thread <br/> *Not available when there are more than 20 participants in a chat thread* | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Get all messages in a chat thread <br/> | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Send Unicode emojis as part of message content | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+|Real-time signaling (enabled by proprietary signaling package**)| Subscribe to get real-time updates for incoming messages and other operations in your chat app. To see a list of supported updates for real-time signaling, see [Chat concepts](concepts.md#real-time-signaling) | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Event Grid support | Use integration with Azure Event Grid and configure your communication service to execute business logic based on chat activity or to plug in a custom push notification service | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Monitoring | Use the API request metrics emitted in the Azure portal to build dashboards, monitor the health of your chat app, and set alerts to detect abnormalities | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Configure your Communication Services resource to receive chat operational logs for monitoring and diagnostic purposes | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
 **The proprietary signaling package is implemented using web sockets. It will fall back to long polling if web sockets are unsupported.
connectors Connectors Create Api Office365 Outlook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-office365-outlook.md
If you try connecting to Outlook by using a different account than the one curre
1. On your logic app's resource group menu, select **Access control (IAM)**. Set up the other account with the **Contributor** role.
- For more information, see [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+ For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
1. After you set up this role, sign in to the Azure portal with the account that now has Contributor permissions. You can now use this account to create the connection to Outlook.
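If you prefer to grant the role from the command line instead of the IAM pane, the following Azure CLI sketch assigns Contributor on the logic app's resource group; the account and resource group names are placeholders.

```console
# Grant the other account Contributor access on the logic app's resource group
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Contributor" \
    --resource-group "<your-logic-app-resource-group>"
```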
cost-management-billing Billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/billing-subscription-transfer.md
Visual Studio and Microsoft Partner Network subscriptions have monthly recurring
If you've accepted the billing ownership of an Azure subscription, we recommend you review these next steps:
-1. Review and update the Service Admin, Co-Admins, and Azure role assignments. To learn more, see [Add or change Azure subscription administrators](add-change-subscription-administrator.md) and [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+1. Review and update the Service Admin, Co-Admins, and Azure role assignments. To learn more, see [Add or change Azure subscription administrators](add-change-subscription-administrator.md) and [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
1. Update credentials associated with this subscription's services including: 1. Management certificates that grant the user admin rights to subscription resources. For more information, see [Create and upload a management certificate for Azure](../../cloud-services/cloud-services-certs-create.md) 1. Access keys for services like Storage. For more information, see [About Azure storage accounts](../../storage/common/storage-account-create.md)
If you have questions or need help, [create a support request](https://go.micro
## Next steps -- Review and update the Service Admin, Co-Admins, and Azure role assignments. To learn more, see [Add or change Azure subscription administrators](add-change-subscription-administrator.md) and [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+- Review and update the Service Admin, Co-Admins, and Azure role assignments. To learn more, see [Add or change Azure subscription administrators](add-change-subscription-administrator.md) and [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
cost-management-billing Enterprise Mgmt Grp Troubleshoot Cost View https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/enterprise-mgmt-grp-troubleshoot-cost-view.md
If you get an error message stating **This asset is unavailable** when trying to
![Screenshot that shows "asset is unavailable" message.](./media/enterprise-mgmt-grp-troubleshoot-cost-view/asset-not-found.png)
-Ask your Azure subscription or management group administrator for access. For more information, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+Ask your Azure subscription or management group administrator for access. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Next steps - If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
cost-management-billing Manage Billing Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/manage-billing-access.md
Account administrator can grant others access to Azure billing information by as
These roles have access to billing information in the [Azure portal](https://portal.azure.com/). People that are assigned these roles can also use the [Billing APIs](consumption-api-overview.md#usage-details-api) to programmatically get invoices and usage details.
-To assign roles, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+To assign roles, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
** If you're an EA customer, an Account Owner can assign the above role to other users of their team. But for these users to view billing information, the Enterprise Administrator must enable AO view charges in the Enterprise portal.
The Billing Reader feature is in preview, and does not yet support non-global cl
## Next steps -- Users in other roles, such as Owner or Contributor, can access not just billing information, but Azure services as well. To manage these roles, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+- Users in other roles, such as Owner or Contributor, can access not just billing information, but Azure services as well. To manage these roles, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
- For more information about roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). ## Need help? Contact us.
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mpa-request-ownership.md
If you need help, [contact support](https://portal.azure.com/?#blade/Microsoft_A
## Next steps * The billing ownership of the Azure subscriptions is transferred to you. Keep track of the charges for these subscriptions in the [Azure portal](https://portal.azure.com).
-* Work with the customer to get access to the transferred Azure subscriptions. [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+* Work with the customer to get access to the transferred Azure subscriptions. [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/understand-ea-roles.md
The following table shows the relationship between the Enterprise Agreement admi
|Account Owner OR Department Admin|✘ Disabled |none|No pricing| |None|Not applicable |Owner|Retail pricing|
-You set the Enterprise admin role and view charges policies in the Enterprise portal. The Azure role can be updated in the Azure portal. For more information, see [Manage access using RBAC and the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+You set the Enterprise admin role and view charges policies in the Enterprise portal. The Azure role can be updated in the Azure portal. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Next steps - [Manage access to billing information for Azure](manage-billing-access.md)-- [Manage access using RBAC and the Azure portal](../../role-based-access-control/role-assignments-portal.md)
+- [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
- [Azure built-in roles](../../role-based-access-control/built-in-roles.md)
cost-management-billing Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/security-baseline.md
Azure Cost Management offers built-in roles, readers and contributors.
What is Azure role-based access control (Azure RBAC) ../role-based-access-control/overview.md -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
data-factory Tutorial Hybrid Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-hybrid-copy-data-tool.md
Before you begin, if you don't already have an Azure subscription, [create a fre
### Azure roles To create data factory instances, the user account you use to log in to Azure must be assigned a *Contributor* or *Owner* role or must be an *administrator* of the Azure subscription.
-To view the permissions you have in the subscription, go to the Azure portal. Select your user name in the upper-right corner, and then select **Permissions**. If you have access to multiple subscriptions, select the appropriate subscription. For sample instructions on how to add a user to a role, see [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+To view the permissions you have in the subscription, go to the Azure portal. Select your user name in the upper-right corner, and then select **Permissions**. If you have access to multiple subscriptions, select the appropriate subscription. For sample instructions on how to add a user to a role, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
### SQL Server 2014, 2016, and 2017 In this tutorial, you use a SQL Server database as a *source* data store. The pipeline in the data factory you create in this tutorial copies data from this SQL Server database (source) to Blob storage (sink). You then create a table named **emp** in your SQL Server database and insert a couple of sample entries into the table.
data-factory Tutorial Hybrid Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-hybrid-copy-portal.md
Before you begin, if you don't already have an Azure subscription, [create a fre
### Azure roles To create data factory instances, the user account you use to sign in to Azure must be assigned a *Contributor* or *Owner* role or must be an *administrator* of the Azure subscription.
-To view the permissions you have in the subscription, go to the Azure portal. In the upper-right corner, select your user name, and then select **Permissions**. If you have access to multiple subscriptions, select the appropriate subscription. For sample instructions on how to add a user to a role, see [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+To view the permissions you have in the subscription, go to the Azure portal. In the upper-right corner, select your user name, and then select **Permissions**. If you have access to multiple subscriptions, select the appropriate subscription. For sample instructions on how to add a user to a role, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
### SQL Server 2014, 2016, and 2017 In this tutorial, you use a SQL Server database as a *source* data store. The pipeline in the data factory you create in this tutorial copies data from this SQL Server database (source) to Blob storage (sink). You then create a table named **emp** in your SQL Server database and insert a couple of sample entries into the table.
data-factory Tutorial Hybrid Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-hybrid-copy-powershell.md
Before you begin, if you don't already have an Azure subscription, [create a fre
### Azure roles To create data factory instances, the user account you use to sign in to Azure must be assigned a *Contributor* or *Owner* role or must be an *administrator* of the Azure subscription.
-To view the permissions you have in the subscription, go to the Azure portal, select your username at the top-right corner, and then select **Permissions**. If you have access to multiple subscriptions, select the appropriate subscription. For sample instructions on adding a user to a role, see the [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md) article.
+To view the permissions you have in the subscription, go to the Azure portal, select your username at the top-right corner, and then select **Permissions**. If you have access to multiple subscriptions, select the appropriate subscription. For sample instructions on adding a user to a role, see the [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) article.
### SQL Server 2014, 2016, and 2017 In this tutorial, you use a SQL Server database as a *source* data store. The pipeline in the data factory you create in this tutorial copies data from this SQL Server database (source) to Azure Blob storage (sink). You then create a table named **emp** in your SQL Server database, and insert a couple of sample entries into the table.
data-share Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/security-baseline.md
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Use Azure role-based access control (Azure RBAC) to manage access to data and resources related to Azure Data Share resources, otherwise use service-specific access control methods. -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
**Azure Security Center monitoring**: Yes
defender-for-iot Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/security-baseline.md
Use built-in roles to allocate permission and only create custom role when requi
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
defender-for-iot Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/support-policies.md
+
+ Title: Support policies for Azure Defender for IoT
+
+description: This article describes the support, breaking change policies for Defender for IoT, and the versions of Azure Defender for IoT that are currently available.
+++ Last updated : 2/8/2021++++
+# Versioning and support for Azure Defender for IoT
+
+This article describes the support and breaking change policies for Defender for IoT, and the versions of Azure Defender for IoT that are currently available.
+
+## Servicing information and timelines
+
+Microsoft plans to release updates for Azure Defender for IoT at least once per quarter. Each general availability (GA) version of the Azure Defender for IoT sensor and the Azure Defender for IoT on-premises management console is supported for up to nine months after its release. Fixes and new functionality are applied to the current GA versions that are in support, and are not applied to older GA versions.
+
+## Versions and support dates
+
+| Version | Date released | End support date |
+|--|--|--|
+| 10.0 | 01/2021 | 10/2021 |
+
+## Next steps
+
+See [What's new in Azure Defender for IoT?](release-notes.md)
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-security.md
Azure provides **two Azure built-in roles** for authorizing access to the Azure
| Azure Digital Twins Data Reader | Gives read-only access to Azure Digital Twins resources | d57506d4-4c8d-48b1-8587-93c323f6a5a3 | You can assign roles in two ways:
-* via the access control (IAM) pane for Azure Digital Twins in the Azure portal (see [*Add or remove Azure role assignments using the Azure portal*](../role-based-access-control/role-assignments-portal.md))
+* via the access control (IAM) pane for Azure Digital Twins in the Azure portal (see [*Assign Azure roles using the Azure portal*](../role-based-access-control/role-assignments-portal.md))
* via CLI commands to add or remove a role For more detailed steps on how to do this, try it out in the Azure Digital Twins [*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md).
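As a sketch of the CLI path (the assignee and resource IDs are placeholders), a role assignment scoped to a specific Azure Digital Twins instance looks like this:

```console
# Assign the built-in reader role on a specific Azure Digital Twins instance
az role assignment create \
    --assignee "<user-or-service-principal-id>" \
    --role "Azure Digital Twins Data Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<instance-name>"
```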
dms Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/faq.md
There are several prerequisites required to ensure that Azure Database Migration
Azure Database Migration Service prerequisites that are common across all supported migration scenarios include the need to: * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* Ensure that your virtual network Network Security Group rules don't block the following communication ports 443, 53, 5671-5672, 9350-9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Ensure that your virtual network Network Security Group rules don't block outbound port 443 to the ServiceBus, Storage, and AzureMonitor service tags; a sample NSG rule is sketched after this list. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration. For a list of all the prerequisites required to compete specific migration scenarios using Azure Database Migration Service, see the related tutorials in the Azure Database Migration Service [documentation](./dms-overview.md) on docs.microsoft.com.
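A hedged sketch of an outbound rule that satisfies this requirement with the Azure CLI is shown below. The resource names and priority value are placeholders; repeat the command with the Storage and AzureMonitor service tags (NSG rules accept only one service tag each).

```console
# Allow outbound TCP 443 to the ServiceBus service tag (repeat for Storage and AzureMonitor)
az network nsg rule create \
    --resource-group <your-resource-group> \
    --nsg-name <your-dms-subnet-nsg> \
    --name Allow-ServiceBus-443 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --priority 200 \
    --destination-address-prefixes ServiceBus \
    --destination-port-ranges 443
```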
dms How To Migrate Ssis Packages Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/how-to-migrate-ssis-packages-managed-instance.md
In this article, you learn how to:
To complete these steps, you need: * To create a Microsoft Azure Virtual Network for the Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information, see the article [Network topologies for SQL Managed Instance migrations using Azure Database Migration Service]( https://aka.ms/dmsnetworkformi). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-* To ensure that your virtual network Network Security Group rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* To ensure that your virtual network Network Security Group rules don't block outbound port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* To configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access?view=sql-server-2017). * To open your Windows Firewall to allow the Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. * If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that the Azure Database Migration Service can connect to a named instance on your source server.
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/pre-reqs.md
Prerequisites associated with using the Azure Database Migration Service are lis
Azure Database Migration Service prerequisites that are common across all supported migration scenarios include the need to: * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* Ensure that your virtual network Network Security Group (NSG) rules don't block the following communication ports 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Ensure that your virtual network Network Security Group (NSG) rules don't block outbound port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration. * Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access). * Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
dms Resource Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/resource-network-topologies.md
Use this network topology if your environment requires one or more of the follow
| **NAME** | **PORT** | **PROTOCOL** | **SOURCE** | **DESTINATION** | **ACTION** | **Reason for rule** | ||-|--||||--|
-| management | 443,9354 | TCP | Any | Any | Allow | Management plane communication through Service Bus and Azure blob storage. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
-| Diagnostics | 12000 | TCP | Any | Any | Allow | DMS uses this rule to collect diagnostic information for troubleshooting purposes. |
+| ServiceBus | 443 | TCP | Any | ServiceTag: ServiceBus | Allow | Management plane communication through Service Bus. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
+| Storage | 443 | TCP | Any | ServiceTag: Storage | Allow | Management plane using Azure Blob storage. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
+| Diagnostics | 443 | TCP | Any | ServiceTag: AzureMonitor | Allow | DMS uses this rule to collect diagnostic information for troubleshooting purposes. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
| SQL Source server | 1433 (or TCP IP port that SQL Server is listening to) | TCP | Any | On-premises address space | Allow | SQL Server source connectivity from DMS <br/>(If you have site-to-site connectivity, you may not need this rule.) | | SQL Server named instance | 1434 | UDP | Any | On-premises address space | Allow | SQL Server named instance source connectivity from DMS <br/>(If you have site-to-site connectivity, you may not need this rule.) |
-| SMB share | 445 | TCP | Any | On-premises address space | Allow | SMB network share for DMS to store database backup files for migrations to Azure SQL Database MI and SQL Servers on Azure VM <br/>(If you have site-to-site connectivity, you may not need this rule). |
+| SMB share | 445 (if the scenario needs it) | TCP | Any | On-premises address space | Allow | SMB network share for DMS to store database backup files for migrations to Azure SQL Database MI and SQL Servers on Azure VM <br/>(If you have site-to-site connectivity, you may not need this rule). |
| DMS_subnet | Any | Any | Any | DMS_Subnet | Allow | | ## See also
dms Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/security-baseline.md
Use built-in roles to allocate permission and only create custom role when requi
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
To complete this tutorial, you need to:
* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/quickstart-create-hyperscale-portal.md) as the target database server to migrate data into. * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model. For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-* Ensure that the Network Security Group (NSG) rules for your virtual network don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Ensure that the Network Security Group (NSG) rules for your virtual network don't block outbound port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for Azure Database for PostgreSQL source to allow Azure Database Migration Service to access to the source databases. Provide the subnet range of the virtual network used for Azure Database Migration Service. * Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for Azure Database for PostgreSQL target to allow Azure Database Migration Service to access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service. * [Enable logical replication](../postgresql/concepts-logical.md) in the Azure DB for PostgreSQL source.
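The following Azure CLI sketch covers the firewall-rule and logical-replication steps above for a single-server deployment. The resource group, server names, and IP range are placeholders, and the `azure.replication_support` parameter name is taken from the logical replication guidance linked above, so verify it for your deployment.

```console
# Allow the DMS subnet range on the source server (repeat for the target server)
az postgres server firewall-rule create \
    --resource-group <your-resource-group> \
    --server-name <your-source-server> \
    --name AllowDmsSubnet \
    --start-ip-address 10.0.1.0 \
    --end-ip-address 10.0.1.255

# Enable logical replication on the source server, then restart it so the change takes effect
az postgres server configuration set \
    --resource-group <your-resource-group> \
    --server-name <your-source-server> \
    --name azure.replication_support \
    --value logical
az postgres server restart --resource-group <your-resource-group> --name <your-source-server>
```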
dms Tutorial Mysql Azure Mysql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-online.md
To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-* Ensure that your virtual network Network Security Group rules don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for database engine access](https://docs.microsoft.com/azure/mysql/concepts-firewall-rules).
+* Ensure that your virtual network Network Security Group rules don't block outbound port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
* Open your Windows firewall to allow Azure Database Migration Service to access the source MySQL Server, which by default is TCP port 3306. * When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration. * Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for Azure Database for MySQL to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
dms Tutorial Oracle Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-oracle-azure-postgresql-online.md
To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-* Ensure that your virtual network Network Security Group (NSG) rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Ensure that your virtual network Network Security Group (NSG) rules don't block outbound port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access). * Open your Windows firewall to allow Azure Database Migration Service to access the source Oracle server, which by default is TCP port 1521. * When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-* Ensure that the Network Security Group (NSG) rules for your virtual network don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for database engine access](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules).
+* Ensure that the Network Security Group (NSG) rules for your virtual network don't block outbound traffic on port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
* Open your Windows firewall to allow Azure Database Migration Service to access the source PostgreSQL Server, which by default is TCP port 5432. * When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration. * Create a server-level [firewall rule](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules) for Azure Database for PostgreSQL to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online.md
To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-* Ensure that your virtual network Network Security Group (NSG) rules don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for database engine access](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules).
+* Ensure that your virtual network Network Security Group (NSG) rules don't block outbound traffic on port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
* Open your Windows firewall to allow Azure Database Migration Service to access the source PostgreSQL Server, which by default is TCP port 5432. * When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration. * Create a server-level [firewall rule](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules) for Azure Database for PostgreSQL to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
dms Tutorial Rds Mysql Server Azure Db For Mysql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-mysql-server-azure-db-for-mysql-online.md
To complete this tutorial, you need to:
* Download and install the [MySQL **Employees** sample database](https://dev.mysql.com/doc/employee/en/employees-installation.html). * Create an instance of [Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-* Ensure that your virtual network Network Security Group rules don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, and 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall](https://docs.microsoft.com/azure/mysql/concepts-firewall-rules) (or your Linux firewall) to allow for database engine access. For MySQL server, allow port 3306 for connectivity.
+* Ensure that your virtual network Network Security Group rules don't block outbound traffic on port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access) (or your Linux firewall) to allow for database engine access. For MySQL server, allow port 3306 for connectivity.
> [!NOTE] > Azure Database for MySQL only supports InnoDB tables. To convert MyISAM tables to InnoDB, please see the article [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html) .
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
To complete this tutorial, you need to:
* Create an instance of [Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Azure Database for PostgreSQL - Hyperscale (Citus)](../postgresql/quickstart-create-hyperscale-portal.md). Refer to this [section](../postgresql/quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) of the document for detail on how to connect to the PostgreSQL Server using pgAdmin. * Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-* Ensure that your virtual network Network Security Group rules don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, and 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for database engine access](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules).
+* Ensure that your virtual network Network Security Group rules don't block outbound traffic on port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
* Open your Windows firewall to allow Azure Database Migration Service to access the source PostgreSQL server, which by default is TCP port 5432. * When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration. * Create a server-level [firewall rule](https://docs.microsoft.com/azure/postgresql/concepts-firewall-rules) for the Azure Database for PostgreSQL server to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
dms Tutorial Rds Sql Server Azure Sql And Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-sql-server-azure-sql-and-managed-instance-online.md
To complete this tutorial, you need to:
> > This configuration is necessary because the Azure Database Migration Service lacks internet connectivity.
-* Ensure that your virtual network Network Security Group rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Ensure that your virtual network Network Security Group rules don't block outbound traffic on port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access). * Open your Windows firewall to allow the Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. * For SQL Database, create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) to allow the Azure Database Migration Service access to the target database. Provide the subnet range of the virtual network used for the Azure Database Migration Service.
dms Tutorial Sql Server Azure Sql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-azure-sql-online.md
- Title: "Tutorial: Migrate SQL Server online to SQL single database"-
-description: Learn to perform an online migration from SQL Server to Azure SQL Database by using the Azure Database Migration Service.
--------- Previously updated : 01/21/2020--
-# Tutorial: Migrate SQL Server to a single database or pooled database in Azure SQL Database online using DMS
-
-You can use the Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/) with minimal downtime. In this tutorial, you migrate the **Adventureworks2012** database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using the Azure Database Migration Service.
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
->
-> - Assess your on-premises database by using the Data Migration Assistant.
-> - Migrate the sample schema by using the Data Migration Assistant.
-> - Create an instance of the Azure Database Migration Service.
-> - Create a migration project by using the Azure Database Migration Service.
-> - Run the migration.
-> - Monitor the migration.
-> - Download a migration report.
-
-> [!NOTE]
-> Using the Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier. For more information, see the Azure Database Migration Service [pricing](https://azure.microsoft.com/pricing/details/database-migration/) page.
-
-> [!IMPORTANT]
-> For an optimal migration experience, Microsoft recommends creating an instance of the Azure Database Migration Service in the same Azure region as the target database. Moving data across regions or geographies can slow down the migration process and introduce errors.
--
-This article describes an online migration from SQL Server to a single database or pooled database in Azure SQL Database. For an offline migration, see [Migrate SQL Server to Azure SQL Database offline using DMS](tutorial-sql-server-to-azure-sql.md).
-
-## Prerequisites
-
-To complete this tutorial, you need to:
-
-- Download and install [SQL Server 2012 or later](https://www.microsoft.com/sql-server/sql-server-downloads).
-- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-- Create a single (or pooled) database in Azure SQL Database, which you do by following the detail in the article [Create a single database in Azure SQL Database using the Azure portal](../azure-sql/database/single-database-create-quickstart.md).
-
- > [!NOTE]
- > If you use SQL Server Integration Services (SSIS) and want to migrate the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to Azure SQL Database, the destination SSISDB will be created and managed automatically on your behalf when you provision SSIS in Azure Data Factory (ADF). For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-- Download and install the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) (DMA) v3.3 or later.
-- Create a Microsoft Azure Virtual Network for the Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-
- > [!NOTE]
- > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- > - Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
- > - Storage endpoint
- > - Service bus endpoint
- >
- > This configuration is necessary because the Azure Database Migration Service lacks internet connectivity.
-
-- Ensure that your virtual network Network Security Group rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on Azure Virtual Network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-- Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-- Open your Windows firewall to allow the Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that port to the firewall.
-- If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that the Azure Database Migration Service can connect to a named instance on your source server.
-- When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration.
-- Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for Azure SQL Database to allow the Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for the Azure Database Migration Service.
-- Ensure that the credentials used to connect to the source SQL Server instance have [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permissions.
-- Ensure that the credentials used to connect to the target Azure SQL Database instance have CONTROL DATABASE permission on the target Azure SQL Database instances.
-- The source SQL Server version must be SQL Server 2005 or later. To determine the version that your SQL Server instance is running, see the article [How to determine the version, edition, and update level of SQL Server and its components](https://support.microsoft.com/help/321185/how-to-determine-the-version-edition-and-update-level-of-sql-server-an).
-- Database(s) must be in either Bulk-logged or Full recovery mode. To determine the recovery model configured for your SQL Server instance, see the article [View or Change the Recovery Model of a Database (SQL Server)](/sql/relational-databases/backup-restore/view-or-change-the-recovery-model-of-a-database-sql-server?view=sql-server-2017).
-- Make sure to take full database backups of the databases. To create a full database backup, see the article [How to: Create a Full Database Backup (Transact-SQL)](/previous-versions/sql/sql-server-2008-r2/ms191304(v=sql.105)).
-- If any of the tables don't have a primary key, enable Change Data Capture (CDC) on the database and specific table(s).
- > [!NOTE]
- > You can use the script below to find any tables that do not have primary keys.
-
- ```sql
- USE <DBName>;
- go
- SELECT is_tracked_by_cdc, name AS TableName
- FROM sys.tables WHERE type = 'U' and is_ms_shipped = 0 AND
- OBJECTPROPERTY(OBJECT_ID, 'TableHasPrimaryKey') = 0;
- ```
-
- If the results show one or more tables with 'is_tracked_by_cdc' as '0', enable change capture for the database and for the specific tables by using the process described in the article [Enable and Disable Change Data Capture (SQL Server)](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server?view=sql-server-2017).
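-
-   For illustration only, the following is a minimal sketch of enabling CDC for one such table; the database, schema, table, and role names (`<DBName>`, `dbo`, `OrdersNoKey`, `cdc_admin`) are placeholders for your own objects, and the linked article remains the full procedure:
-
-   ```sql
-   USE <DBName>;
-   GO
-   -- Enable CDC at the database level (run once per database)
-   EXEC sys.sp_cdc_enable_db;
-   GO
-   -- Enable CDC for a specific table that has no primary key
-   EXEC sys.sp_cdc_enable_table
-       @source_schema = N'dbo',
-       @source_name   = N'OrdersNoKey',
-       @role_name     = N'cdc_admin';
-   ```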
--- Configure the distributor role for source SQL Server.-
- >[!NOTE]
- > You can determine if replication components are installed by using the query below.
-
- ```sql
- USE master;
- DECLARE @installed int;
- EXEC @installed = sys.sp_MS_replication_installed;
- SELECT @installed as installed;
- ```
-
- If the result returns an error message suggesting to install replication components, install SQL Server replication components by using the process in the article [Install SQL Server replication](/sql/database-engine/install-windows/install-sql-server-replication?view=sql-server-2017).
-
- If the replication is already installed, check if the distribution role is configured on the source SQL Server using the T-SQL command below.
-
- ```sql
- EXEC sp_get_distributor;
- ```
-
-   If distribution isn't set up (that is, the distribution server shows NULL in the output of the command above), configure distribution by using the guidance provided in the article [Configure Distribution](/sql/relational-databases/replication/configure-publishing-and-distribution?view=sql-server-2017).
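-
-   For illustration only, a minimal sketch of configuring the local server as its own distributor; the password and distribution database name are placeholders, and the linked article remains the authoritative procedure:
-
-   ```sql
-   -- Designate the local server as the distributor (placeholder password)
-   DECLARE @dist sysname = @@SERVERNAME;
-   EXEC sp_adddistributor @distributor = @dist, @password = N'<StrongPassword>';
-
-   -- Create the distribution database on the distributor (Windows authentication)
-   EXEC sp_adddistributiondb @database = N'distribution', @security_mode = 1;
-   ```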
--- Disable database triggers on the target Azure SQL Database.
- >[!NOTE]
- > You can find the database triggers on the target Azure SQL Database by using the following query:
-
- ```sql
-   USE <Database name>;
-   SELECT * FROM sys.triggers;
- ```
-
- For more information, see the article [DISABLE TRIGGER (Transact-SQL)](/sql/t-sql/statements/disable-trigger-transact-sql?view=sql-server-2017).
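-
-   For illustration only, a minimal sketch of disabling the triggers returned by the query above; the trigger and table names (`dbo.trg_Orders_Audit`, `dbo.Orders`) are placeholders for your own objects:
-
-   ```sql
-   -- Disable a single DML trigger on its table
-   DISABLE TRIGGER dbo.trg_Orders_Audit ON dbo.Orders;
-
-   -- Or disable all triggers defined on a table
-   DISABLE TRIGGER ALL ON dbo.Orders;
-   ```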
-
-## Assess your on-premises database
-
-Before you can migrate data from a SQL Server instance to Azure SQL Database, you need to assess the SQL Server database for any blocking issues that might prevent migration. Using the Data Migration Assistant v3.3 or later, follow the steps described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem) to complete the on-premises database assessment.
-
-To assess an on-premises database, perform the following steps:
-
-1. In DMA, select the New (+) icon, and then select the **Assessment** project type.
-2. Specify a project name. In the **Source server type** text box, select **SQL Server**; in the **Target server type** text box, select **Azure SQL Database**; and then select **Create** to create the project.
-
- When you're assessing the source SQL Server database migrating to a single database or pooled database in Azure SQL Database, you can choose one or both of the following assessment report types:
-
- - Check database compatibility
- - Check feature parity
-
- Both report types are selected by default.
-
-3. In DMA, on the **Options** screen, select **Next**.
-4. On the **Select sources** screen, in the **Connect to a server** dialog box, provide the connection details to your SQL Server, and then select **Connect**.
-5. In the **Add sources** dialog box, select **AdventureWorks2012**, select **Add**, and then select **Start Assessment**.
-
- > [!NOTE]
- > If you use SSIS, DMA does not currently support the assessment of the source SSISDB. However, SSIS projects/packages will be assessed/validated as they are redeployed to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
- When the assessment is complete, the results display as shown in the following graphic:
-
- ![Assess data migration](media/tutorial-sql-server-to-azure-sql-online/dma-assessments.png)
-
- For single databases or pooled databases in Azure SQL Database, the assessments identify feature parity issues and migration blocking issues for deploying to a single database or pooled database.
-
- - The **SQL Server feature parity** category provides a comprehensive set of recommendations, alternative approaches available in Azure, and mitigating steps to help you plan the effort into your migration projects.
- - The **Compatibility issues** category identifies partially supported or unsupported features that reflect compatibility issues that might block migrating SQL Server database(s) to Azure SQL Database. Recommendations are also provided to help you address those issues.
-
-6. Review the assessment results for migration blocking issues and feature parity issues by selecting the specific options.
-
-## Migrate the sample schema
-
-After you're comfortable with the assessment and satisfied that the selected database is a viable candidate for migration to a single database or pooled database in Azure SQL Database, use DMA to migrate the schema to Azure SQL Database.
-
-> [!NOTE]
-> Before you create a migration project in DMA, be sure that you have already provisioned a SQL database in Azure as mentioned in the prerequisites. For purposes of this tutorial, the name of the Azure SQL Database is assumed to be **AdventureWorksAzure**, but you can provide whatever name you wish.
-
-> [!IMPORTANT]
-> If you use SSIS, DMA does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-To migrate the **AdventureWorks2012** schema to a single database or pooled database in Azure SQL Database, perform the following steps:
-
-1. In the Data Migration Assistant, select the New (+) icon, and then under **Project type**, select **Migration**.
-2. Specify a project name. In the **Source server type** text box, select **SQL Server**, and then in the **Target server type** text box, select **Azure SQL Database**.
-3. Under **Migration Scope**, select **Schema only**.
-
- After performing the previous steps, the DMA interface should appear as shown in the following graphic:
-
- ![Create Data Migration Assistant Project](media/tutorial-sql-server-to-azure-sql-online/dma-create-project.png)
-
-4. Select **Create** to create the project.
-5. In DMA, specify the source connection details for your SQL Server, select **Connect**, and then select the **AdventureWorks2012** database.
-
- ![Data Migration Assistant Source Connection Details](media/tutorial-sql-server-to-azure-sql-online/dma-source-connect.png)
-
-6. Select **Next**, under **Connect to target server**, specify the target connection details for the Azure SQL database, select **Connect**, and then select the **AdventureWorksAzure** database you had pre-provisioned in Azure SQL Database.
-
- ![Data Migration Assistant Target Connection Details](media/tutorial-sql-server-to-azure-sql-online/dma-target-connect.png)
-
-7. Select **Next** to advance to the **Select objects** screen, on which you can specify the schema objects in the **AdventureWorks2012** database that need to be deployed to Azure SQL Database.
-
- By default, all objects are selected.
-
- ![Generate SQL Scripts](media/tutorial-sql-server-to-azure-sql-online/dma-assessment-source.png)
-
-8. Select **Generate SQL script** to create the SQL scripts, and then review the scripts for any errors.
-
- ![Schema Script](media/tutorial-sql-server-to-azure-sql-online/dma-schema-script.png)
-
-9. Select **Deploy schema** to deploy the schema to Azure SQL Database, and then after the schema is deployed, check the target server for any anomalies.
-
- ![Deploy Schema](media/tutorial-sql-server-to-azure-sql-online/dma-schema-deploy.png)
-
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-sql-server-to-azure-sql-online/portal-select-subscription1.png)
-
-2. Select the subscription in which you want to create the instance of the Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-sql-server-to-azure-sql-online/portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-sql-server-to-azure-sql-online/portal-register-resource-provider.png)
-
-## Create an instance
-
-1. In the Azure portal, select + **Create a resource**, search for Azure Database Migration Service, and then select **Azure Database Migration Service** from the drop-down list.
-
- ![Azure Marketplace](media/tutorial-sql-server-to-azure-sql-online/portal-marketplace.png)
-
-2. On the **Azure Database Migration Service** screen, select **Create**.
-
- ![Create Azure Database Migration Service instance](media/tutorial-sql-server-to-azure-sql-online/dms-create1.png)
-
-3. On the **Create Migration Service** screen, specify a name for the service, the subscription, and a new or existing resource group.
-
-4. Select the location in which you want to create the instance of the Azure Database Migration Service.
-
-5. Select an existing virtual network or create a new one.
-
- The virtual network provides the Azure Database Migration Service with access to the source SQL Server and the target Azure SQL Database instance.
-
- For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-
-6. Select a pricing tier.
-
- For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
-
- ![Configure Azure Database Migration Service instance settings](media/tutorial-sql-server-to-azure-sql-online/dms-settings2.png)
-
-7. Select **Create** to create the service.
-
-## Create a migration project
-
-After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
-
- ![Locate all instances of the Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql-online/dms-search.png)
-
-2. On the **Azure Database Migration Services** screen, search for the name of the Azure Database Migration Service instance that you created, and then select the instance.
-
- ![Locate your instance of the Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql-online/dms-instance-search.png)
-
-3. Select + **New Migration Project**.
-4. On the **New migration project** screen, specify a name for the project. In the **Source server type** text box, select **SQL Server**, and in the **Target server type** text box, select **Azure SQL Database**.
-5. In the **Choose type of activity** section, select **Online data migration**.
-
- ![Create Database Migration Service Project](media/tutorial-sql-server-to-azure-sql-online/dms-create-project3.png)
-
- > [!NOTE]
- > Alternately, you can choose **Create project only** to create the migration project now and execute the migration later.
-
-6. Select **Save**.
-
-7. Select **Create and run activity** to create the project and run the migration activity.
-
- ![Create and Run Database Migration Service Activity](media/tutorial-sql-server-to-azure-sql-online/dms-create-and-run-activity.png)
-
-## Specify source details
-
-1. On the **Migration source detail** screen, specify the connection details for the source SQL Server instance.
-
- Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server instance name. You can also use the IP Address for situations in which DNS name resolution isn't possible.
-
-2. If you haven't installed a trusted certificate on your source server, select the **Trust server certificate** check box.
-
- When a trusted certificate isn't installed, SQL Server generates a self-signed certificate when the instance is started. This certificate is used to encrypt the credentials for client connections.
-
- > [!CAUTION]
- > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
-
- ![Source Details](media/tutorial-sql-server-to-azure-sql-online/dms-source-details3.png)
-
- > [!IMPORTANT]
- > If you use SSIS, DMS does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-## Specify target details
-
-1. Select **Save**, and then on the **Migration target details** screen, specify the connection details for the target Azure SQL Database, which is the pre-provisioned Azure SQL Database to which the **AdventureWorks2012** schema was deployed by using the DMA.
-
- ![Select Target](media/tutorial-sql-server-to-azure-sql-online/dms-select-target3.png)
-
-2. Select **Save**, and then on the **Map to target databases** screen, map the source and the target database for migration.
-
- If the target database contains the same database name as the source database, the Azure Database Migration Service selects the target database by default.
-
- ![Map to target databases](media/tutorial-sql-server-to-azure-sql-online/dms-map-targets-activity3.png)
-
-3. Select **Save**. On the **Select tables** screen, expand the table listing, and then review the list of affected fields.
-
- The Azure Database Migration Service auto selects all the empty source tables that exist on the target Azure SQL Database instance. If you want to remigrate tables that already include data, you need to explicitly select the tables on this blade.
-
- ![Select tables](media/tutorial-sql-server-to-azure-sql-online/dms-configure-setting-activity3.png)
-
-4. Select **Save**. On the **Migration summary** screen, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
-
- ![Migration Summary](media/tutorial-sql-server-to-azure-sql-online/dms-migration-summary.png)
-
-## Run the migration
--- Select **Run migration**.-
- The migration activity window appears, and the **Status** of the activity is **Initializing**.
-
- ![Activity Status - initializing](media/tutorial-sql-server-to-azure-sql-online/dms-activity-status2.png)
-
-## Monitor the migration
-
-1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Running**.
-
-2. Click on a specific database to get to the migration status for **Full data load** and **Incremental data sync** operations.
-
- ![Activity Status - in progress](media/tutorial-sql-server-to-azure-sql-online/dms-activity-in-progress.png)
-
-## Perform migration cutover
-
-After the initial Full load is completed, the databases are marked **Ready to cutover**.
-
-1. When you're ready to complete the database migration, select **Start Cutover**.
-
- ![Start cutover](media/tutorial-sql-server-to-azure-sql-online/dms-start-cutover.png)
-
-2. Make sure to stop all the incoming transactions to the source database; wait until the **Pending changes** counter shows **0**.
-3. Select **Confirm**, and then select **Apply**.
-4. When the database migration status shows **Completed**, connect your applications to the new target Azure SQL Database.
-
- ![Activity Status - completed](media/tutorial-sql-server-to-azure-sql-online/dms-activity-completed.png)
-
-## Next steps
-
-- For information about known issues and limitations when performing online migrations to Azure SQL Database, see the article [Known issues and workarounds with Azure SQL Database online migrations](known-issues-azure-sql-online.md).
-- For information about the Azure Database Migration Service, see the article [What is the Azure Database Migration Service?](./dms-overview.md).
-- For information about Azure SQL Database, see the article [What is the Azure SQL Database service?](../azure-sql/database/sql-database-paas-overview.md).
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online.md
To complete this tutorial, you need to:
> * Choose to allow all networks to access the storage account. > * Turn on [subnet delegation](../virtual-network/manage-subnet-delegation.md) on the MI subnet and update the Storage Account firewall rules to allow this subnet.
-* Ensure that your virtual network Network Security Group rules don't block the following outbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+* Ensure that your virtual network Network Security Group rules don't block outbound traffic on port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access). * Open your Windows Firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall. * If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-azure-sql.md
To complete this tutorial, you need to:
> >If you don't have site-to-site connectivity between the on-premises network and Azure or if there is limited site-to-site connectivity bandwidth, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode leverages an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-hybrid-portal.md). -- Ensure that your virtual network Network Security Group outbound security rules don't block the following communication ports required for the Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+- Ensure that your virtual network Network Security Group outbound security rules don't block outbound traffic on port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
- Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access). - Open your Windows firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall. - If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-managed-instance.md
To complete this tutorial, you need to:
> > This configuration is necessary because Azure Database Migration Service lacks internet connectivity. -- Ensure that your virtual network Network Security Group rules don't block the following inbound communication ports to Azure Database Migration Service: 443, 53, 9354, 445, 12000. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+- Ensure that your virtual network Network Security Group rules don't block outbound traffic on port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
- Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access). - Open your Windows Firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall. - If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.
firewall-manager Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/security-baseline.md
Use built-in roles to allocate permission and only create custom roles when required
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
**Azure Security Center monitoring**: Not applicable
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-certificates.md
Ensure your CA certificate complies with the following requirements:
- It must be a single certificate, and shouldn't include the entire chain of certificates. -- It must be valid for one year forward.
+- It must be valid for at least one year forward.
- It must be an RSA private key with a minimum size of 4096 bytes.
frontdoor Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/security-baseline.md
Use built-in roles to allocate permission and only create custom roles based on
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
frontdoor Concept Caching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-caching.md
+
+ Title: 'Azure Front Door: Caching'
+description: This article helps you understand the behavior of Azure Front Door Standard/Premium with routing rules that have enabled caching.
+++++ Last updated : 02/18/2021+++
+# Caching with Azure Front Door Standard/Premium (Preview)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+In this article, you'll learn how Front Door Standard/Premium (Preview) routes and rule sets behave when you have caching enabled. Azure Front Door is a modern Content Delivery Network (CDN) with dynamic site acceleration and load balancing.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Delivery of large files
+
+Front Door Standard/Premium (Preview) delivers large files without a cap on file size. Front Door uses a technique called object chunking. When a large file is requested, Front Door retrieves smaller pieces of the file from the origin. After receiving a full or byte-range file request, the Front Door environment requests the file from the origin in chunks of 8 MB.
+
+After the chunk arrives at the Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection.
+
+For more information on the byte-range request, read [RFC 7233](https://web.archive.org/web/20171009165003/http://www.rfc-base.org/rfc-7233.html).
+Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Ensuing requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the backend. This optimization relies on the origin's ability to support byte-range requests. If the origin doesn't support byte-range requests, this optimization isn't effective.
+
+## File compression
+
+To learn about compression, refer to the article on improving performance by compressing files in Azure Front Door.
+
+## Query string behavior
+
+With Front Door, you can control how files are cached for a web request that contains a query string. In a web request with a query string, the query string is that portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, `http://www.contoso.com/content.mov?field1=value1&field2=value2`. If there's more than one key-value pair in a query string of a request then their order doesn't matter.
+
+* **Ignore query strings**: In this mode, Front Door passes the query strings from the requestor to the origin on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
+
+* **Cache every unique URL**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the origin for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for ensuing requests with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting.
+* You can also use Rule Set to specify **cache key query string** behavior to include or exclude specified parameters when the cache key gets generated. For example, the default cache key is `/foo/image/asset.html`, and the sample request is `https://contoso.com//foo/image/asset.html?language=EN&userid=100&sessionid=200`. If a rule set rule excludes the query string parameter 'userid', the query string cache key would be `/foo/image/asset.html?language=EN&sessionid=200`.
+
+## Cache purge
+
+For information about purging cached content, refer to cache purge.
+
+## Cache expiration
+The following order of headers is used to determine how long an item will be stored in our cache:
+1. Cache-Control: s-maxage=\<seconds>
+2. Cache-Control: max-age=\<seconds>
+3. Expires: \<http-date>
+
+Cache-Control response headers that indicate that the response won't be cached, such as Cache-Control: private, Cache-Control: no-cache, and Cache-Control: no-store, are honored. If no Cache-Control header is present, the default behavior is that Front Door caches the resource for a period of time that is randomly selected between one and three days.
+
+## Request headers
+
+The following request headers won't be forwarded to an origin when using caching.
+* Content-Length
+* Transfer-Encoding
+
+## Cache duration
+
+Cache duration can be configured in Rule Set. The cache duration set via Rule Set is a true cache override, which means that the override value is used regardless of what the origin response header specifies.
+
+## Next steps
+
+* Learn more about [Rule Set Match Conditions](concept-rule-set-match-conditions.md)
+* Learn more about [Rule Set Actions](concept-rule-set-actions.md)
frontdoor Concept Ddos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-ddos.md
+
+ Title: 'Azure Front Door: DDoS protection'
+description: This page provides information about how Azure Front Door Standard/Premium helps to protect against DDoS attacks
+
+documentationcenter: ''
+++ Last updated : 02/18/2021+++
+# DDoS protection on Azure Front Door Standard/Premium (Preview)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+Azure Front Door has several features and characteristics that can help to prevent distributed denial of service (DDoS) attacks. These features can prevent attackers from reaching your application and affecting your application's availability and performance.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Integration with Azure DDoS Protection Basic
+
+Front Door is protected by Azure DDoS Protection Basic. The feature is integrated into the Front Door platform by default and at no extra cost. The full scale and capacity of Front Door's globally deployed network provides defense against common network layer attacks through always-on traffic monitoring and real-time mitigation. Basic DDoS protection also defends against the most common, frequently occurring layer 7 DNS query floods and layer 3 and 4 volumetric attacks that target public endpoints. This service also has a proven track record in protecting Microsoft's enterprise and consumer services from large-scale attacks. For more information, see [Azure DDoS Protection](../../security/fundamentals/ddos-best-practices.md).
+
+## Protocol blocking
+
+Front Door only accepts traffic on the HTTP and HTTPS protocols, and will only process valid requests with a known `Host` header. This behavior helps to mitigate some common DDoS attack types including volumetric attacks that get spread across a range of protocols and ports, DNS amplification attacks, and TCP poisoning attacks.
+
+## Capacity absorption
+
+Front Door is a massively scaled, globally distributed service. We have many customers, including Microsoft's own large-scale cloud products, that receive hundreds of thousands of requests each second. Front Door is located at the edge of Azure's network, absorbing and geographically isolating large-volume attacks. This can prevent malicious traffic from going any further than the edge of the Azure network.
+
+## Caching
+
+[Front Door's caching capabilities](concept-caching.md) can be used to protect backends from large traffic volumes generated by an attack. Cached resources will be returned from the Front Door edge nodes so they don't get forwarded to your backend. Even short cache expiry times (seconds or minutes) on dynamic responses can greatly reduce load on backend services. For more information about caching concepts and patterns, see [Caching considerations](/azure/architecture/best-practices/caching) and [Cache-aside pattern](/azure/architecture/patterns/cache-aside).
+
+## Web Application Firewall (WAF)
+
+[Front Door's Web Application Firewall (WAF)](../../web-application-firewall/afds/afds-overview.md) can be used to mitigate many different types of attacks:
+
+* Using the managed rule set provides protection against many common attacks.
+* Traffic from outside a defined geographic region, or within a defined region, can be blocked or redirected to a static webpage. For more information, see [Geo-filtering](../../web-application-firewall/afds/waf-front-door-geo-filtering.md).
+* IP addresses and ranges that you identify as malicious can be blocked.
+* Rate limiting can be applied to prevent IP addresses from calling your service too frequently.
+* You can create [custom WAF rules](../../web-application-firewall/afds/waf-front-door-custom-rules.md) to automatically block and rate limit HTTP or HTTPS attacks that have known signatures.
+
+## For further protection
+
+If you require further protection, then you can enable [Azure DDoS Protection Standard](../../security/fundamentals/ddos-best-practices.md#ddos-protection-standard) on the VNet where your back-ends are deployed. DDoS Protection Standard customers receive more benefits including:
+
+* Cost protection
+* SLA guarantee
+* Access to experts from the DDoS Rapid Response Team for immediate help during an attack.
+
+## Next steps
+
+Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
frontdoor Concept Endpoint Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-endpoint-manager.md
+
+ Title: 'Azure Front Door: Endpoint Manager'
+description: This article provides an overview of Azure Front Door Endpoint Manager.
+++++ Last updated : 02/18/2021+++
+# What is Azure Front Door Standard/Premium (Preview) Endpoint Manager?
+
+> [!NOTE]
+> * This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
+
+Endpoint Manager provides an overview of the endpoints you've configured for your Azure Front Door. An endpoint is a logical grouping of domains and their associated configurations. Endpoint Manager helps you manage your collection of endpoints with CRUD (create, read, update, and delete) operations. You can manage the following elements for your endpoints through Endpoint Manager:
+
+* Domains
+* Origin Groups
+* Routes
+* Security
++
+Endpoint Manager lists how many instances of each element are created within an endpoint. The association status for each element is also displayed. For example, you may create multiple domains and origin groups, and assign the associations between them with different routes.
+
+> [!IMPORTANT]
+> * Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Linked view
+
+With the linked view within Endpoint Manager, you can easily identify the associations between your Azure Front Door elements, such as:
+
+* Which domains are associated to the current endpoint?
+* Which origin group is associated to which domain?
+* Which WAF policy is associated to which domain?
++
+## Next Steps
+
+Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
frontdoor Concept Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-health-probes.md
+
+ Title: 'Azure Front Door Standard/Premium (Preview) Health probe monitoring'
+description: This article helps you understand how Azure Front Door monitors the health of your backend.
+++++ Last updated : 02/18/2021+++
+# Azure Front Door Standard/Premium (Preview) Health probe monitoring
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+Azure Front Door periodically sends an HTTP or HTTPS request to each of your backends. These requests allow Azure Front Door to determine the health of each endpoint in the backend pool. Front Door then uses the responses from these probes to determine the "best" backend resources to route your client requests to.
+
+> [!WARNING]
+> Since Front Door has many edge environments globally, health probe volume for your backends can be quite high - ranging from 25 requests every minute to as high as 1200 requests per minute, depending on the health probe frequency configured. With the default probe frequency of 30 seconds, the probe volume on your backend should be about 200 requests per minute.
+
+## Supported protocols
+
+Front Door supports sending probes over either HTTP or HTTPS protocols. These probes are sent over the same TCP ports configured for routing client requests, and cannot be overridden.
+
+## Supported HTTP methods for health probes
+
+Front Door supports the following HTTP methods for sending the health probes:
+
+* **GET:** The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI.
+* **HEAD:** The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. For new Front Door profiles, by default, the probe method is set as HEAD.
+
+> [!NOTE]
+> For lower load and cost on your backends, Front Door recommends using HEAD requests for health probes.
+
+## Health probe responses
+
+| Responses | Description |
+| - | - |
+| Determining Health | A 200 OK status code indicates the backend is healthy. Everything else is considered a failure. If for any reason (including network failure) a valid HTTP response isn't received for a probe, the probe is counted as a failure.|
+| Measuring Latency | Latency is the wall-clock time measured from the moment immediately before we send the probe request to the moment when we receive the last byte of the response. We use a new TCP connection for each request, so this measurement isn't biased towards backends with existing warm connections. |
+
+## How Front Door determines backend health
+
+Azure Front Door uses the same three-step process below across all algorithms to determine health.
+
+1. Exclude disabled backends.
+
+1. Exclude backends that have health probe errors (a minimal sketch of this evaluation follows the list):
+
+ * This selection is done by looking at the last _n_ health probe responses. If at least _x_ are healthy, the backend is considered healthy.
+
+ * _n_ is configured by changing the SampleSize property in load-balancing settings.
+
+ * _x_ is configured by changing the SuccessfulSamplesRequired property in load-balancing settings.
+
+1. For the sets of healthy backends in the backend pool, Front Door additionally measures and maintains the latency (round-trip time) for each backend.
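+
+The following is a minimal sketch of the sample-based evaluation in step 2, assuming probe results are tracked as a simple list of booleans per backend. The function and data shapes are illustrative only, not the service's implementation.
+
+```python
+def is_backend_healthy(probe_results, sample_size, successful_samples_required):
+    """probe_results: booleans from oldest to newest, True meaning a 200 OK probe."""
+    # Look at the last `sample_size` (SampleSize) probes only.
+    recent = probe_results[-sample_size:]
+    # Healthy if at least `successful_samples_required` (SuccessfulSamplesRequired) succeeded.
+    return sum(recent) >= successful_samples_required
+
+# SampleSize = 4, SuccessfulSamplesRequired = 3:
+print(is_backend_healthy([True, False, True, True, True], 4, 3))   # True
+print(is_backend_healthy([True, True, False, False, True], 4, 3))  # False
+```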
++
+## Complete health probe failure
+
+If health probes fail for every backend in a backend pool, then Front Door considers all backends healthy and routes traffic in a round robin distribution across all of them.
+
+Once any backend returns to a healthy state, then Front Door will resume the normal load-balancing algorithm.
+
+## Disabling health probes
+
+If you have a single backend in your backend pool or only one backend is active in a backend pool, then you can choose to disable the health probes. By doing so, you'll reduce the load on your application backend.
+
+## Next steps
+
+Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
frontdoor Concept Origin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-origin.md
+
+ Title: Origin and Origin group in Azure Front Door Standard/Premium
+description: This article describes what origin and origin group are in an Azure Front Door configuration.
+ Last updated: 02/18/2021
+# Origin and Origin group in Azure Front Door Standard/Premium (Preview)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+This article will cover concepts about how your web application deployment works with Azure Front Door Standard/Premium. You'll also learn about what an *origin* and *origin group* is in the Azure Front Door Standard/Premium configuration.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Origin
+
+Azure Front Door Standard/Premium origin refers to the host name or public IP of your application that serves your client requests. Azure Front Door Standard/Premium supports both Azure and non-Azure resources in an origin group. The application can also be hosted in your on-premises datacenter or with another cloud provider. Origin shouldn't be confused with your database tier or storage tier. Origin should be viewed as the public endpoint for your application backend. When you add an origin to an Azure Front Door Standard/Premium origin group, you must also add the following information:
+
+* **Origin type:** The type of resource you want to add. Front Door supports autodiscovery of your application backends from App Service, Cloud Service, or Storage. If you want a different resource in Azure or even a non-Azure backend, select **Custom host**.
+
+ >[!IMPORTANT]
+    >During configuration, the API doesn't validate whether the origin is accessible from the Front Door environment. Make sure that Front Door can reach your origin.
+
+* **Subscription and Origin host name:** If you didn't select **Custom host** for your backend host type, select your backend by choosing the appropriate subscription and the corresponding backend host name.
+
+* **Origin host header:** The host header value sent to the backend for each request. For more information, see [Origin host header](#hostheader).
+
+* **Priority:** Assign priorities to your different origins when you want to use a primary origin for all traffic, and keep lower-priority origins as backups in case the primary origins are unavailable. For more information, see [Priority](#priority).
+
+* **Weight:** Assign weights to your different origins to distribute traffic across a set of origins, either evenly or according to weight coefficients. For more information, see [Weights](#weighted).
+
+### <a name = "hostheader"></a>Origin host header
+
+Requests that are forwarded by Azure Front Door Standard/Premium to an origin will include a host header field that the origin uses to retrieve the targeted resource. The value for this field typically comes from the origin URI that has the host header and port.
+
+For example, a request made for `www.contoso.com` will have the host header `www.contoso.com`. If you use Azure portal to configure your origin, the default value for this field is the host name of the backend. If your origin is `contoso-westus.azurewebsites.net`, in the Azure portal, the autopopulated value for the origin host header will be `contoso-westus.azurewebsites.net`. However, if you use Azure Resource Manager templates or another method without explicitly setting this field, Front Door will send the incoming host name as the value for the host header. If the request was made for `www.contoso.com`, and your origin is `contoso-westus.azurewebsites.net` that has an empty header field, Front Door will set the host header as `www.contoso.com`.
+
+Most app backends (Azure Web Apps, Blob storage, and Cloud Services) require the host header to match the domain of the backend. However, the frontend host that routes to your backend will use a different hostname such as `www.contoso.net`.
+
+If your origin requires the host header to match the backend hostname, make sure that the backend host header includes the hostname of the backend.
+
+#### Configuring the origin host header for the origin
+
+To configure the **origin host header** field for an origin in the origin group section:
+
+1. Open your Front Door resource and select the origin group with the origin to configure.
+
+2. Add an origin if you haven't done so, or edit an existing one.
+
+3. Set the origin host header field to a custom value or leave it blank. The hostname for the incoming request will be used as the host header value.
+
+## Origin group
+
+An origin group in Azure Front Door Standard/Premium refers to a set of origins that receives similar traffic for an application. In other words, it's a logical grouping of your application instances across the world that receive the same traffic and respond with expected behavior. These origins can be deployed across different regions or within the same region. Origins can run in an Active/Active deployment mode or in an Active/Passive configuration.
+
+An origin group defines how origins should be evaluated via health probes. It also defines how load balancing occurs between them.
+
+### Health probes
+
+Azure Front Door Standard/Premium sends periodic HTTP/HTTPS probe requests to each of your configured origins. Probe requests determine the proximity and health of each origin to load balance your end-user requests. Health probe settings for an origin group define how we poll the health status of app backends. The following settings are available for load-balancing configuration:
+
+* **Path**: The URL used for probe requests for all the origins in the origin group. For example, if one of your origins is `contoso-westus.azurewebsites.net` and the path gets set to /probe/test.aspx, then Front Door environments, assuming the protocol is HTTP, will send health probe requests to `http://contoso-westus.azurewebsites.net/probe/test.aspx`.
+
+* **Protocol**: Defines whether to send the health probe requests from Front Door to your origins with HTTP or HTTPS protocol.
+
+* **Method**: The HTTP method to be used for sending health probes. Options include GET or HEAD (default).
+ > [!NOTE]
+ > For lower load and cost on your backends, Front Door recommends using HEAD requests for health probes.
+
+* **Interval (seconds)**: Defines the frequency of health probes to your origins, or the intervals in which each of the Front Door environments sends a probe.
+
+ >[!NOTE]
+ >For faster failovers, set the interval to a lower value. The lower the value, the higher the health probe volume your backends receive. For example, if the interval is set to 30 seconds with say, 100 Front Door POPs globally, each backend will receive about 200 probe requests per minute.
+
+For more information, see [Health probes](concept-health-probes.md).
+
+### Load-balancing settings
+
+Load-balancing settings for the origin group define how we evaluate health probes. These settings determine if the origin is healthy or unhealthy. They also check how to load-balance traffic between different origins in the origin group. The following settings are available for load-balancing configuration:
+
+* **Sample size:** Identifies how many samples of health probes we need to consider for origin health evaluation.
+
+* **Successful sample size:** Of the sample size previously mentioned, defines the number of successful samples needed to call the origin healthy. For example, assume the Front Door health probe interval is 30 seconds, the sample size is 5, and the successful sample size is 3. Each time we evaluate the health probes for your origin, we look at the last five samples over 150 seconds (5 x 30). At least three successful probes are required to declare the origin healthy.
+
+* **Latency sensitivity (extra latency):** Defines whether you want Azure Front Door Standard/Premium to send the request to the origin within the latency measurement sensitivity range or forward the request to the closest backend.
+
+For more information, see [Least latency based routing method](#latency).
+
+## Routing methods
+
+Azure Front Door Standard/Premium supports different traffic-routing methods to determine how to route your HTTP/HTTPS traffic to different service endpoints. When client requests reach Front Door, the configured routing method is applied to ensure the requests are forwarded to the best backend instance.
+
+There are three traffic-routing methods available in Azure Front Door Standard/Premium:
+
+* **[Latency](#latency):** Latency-based routing ensures that requests are sent to the lowest-latency backends acceptable within a sensitivity range. In other words, your user requests are sent to the "closest" set of backends with respect to network latency.
+* **[Priority](#priority):** You can assign priorities to your backends when you want to configure a primary backend to service all traffic. The secondary backend can be a backup in case the primary backend becomes unavailable.
+* **[Weighted](#weighted):** You can assign weights to your backends when you want to distribute traffic across a set of backends, either evenly or according to the weight coefficients.
+
+All Azure Front Door Standard/Premium configurations include monitoring of backend health and automated instant global failover. For more information, see [Backend Monitoring](concept-health-probes.md). Your Front Door can work based off of a single routing method. But depending on your application needs, you can also combine multiple routing methods to build an optimal routing topology.
+
+### <a name = "latency"></a>Lowest latencies based traffic-routing
+
+Deploying backends in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that is 'closest' to your end users. The default traffic-routing method for your Front Door configuration forwards requests from your end users to the closest backend of the Front Door environment that received the request. Combined with the Anycast architecture of Azure Front Door, this approach ensures that each of your end users gets maximum performance personalized based on their location.
+
+The 'closest' backend isn't necessarily closest as measured by geographic distance. Instead, Front Door determines the closest backends by measuring network latency.
+
+Below is the overall decision flow:
+
+| Available backends | Priority | Latency signal (based on health probe) | Weights |
+|-| -- | -- | -- |
+| First, select all backends that are enabled and returned healthy (200 OK) for the health probe. For example, if there are six backends A, B, C, D, E, and F, and among them C is unhealthy and E is disabled, the list of available backends is A, B, D, and F. | Next, the top-priority backends among the available ones are selected. If backends A, B, and D have priority 1 and backend F has priority 2, then the selected backends are A, B, and D. | Select the backends within the latency range (least latency plus the latency sensitivity in ms specified). If backend A is 15 ms, B is 30 ms, and D is 60 ms away from the Front Door environment where the request landed, and the latency sensitivity is 30 ms, then the lowest-latency pool consists of backends A and B, because D is more than 30 ms farther than the closest backend, which is A. | Lastly, Front Door round-robins the traffic among the final selected pool of backends in the ratio of weights specified. If backend A has a weight of 5 and backend B has a weight of 8, the traffic is distributed in the ratio of 5:8 among backends A and B. |
+
+>[!NOTE]
+> By default, the latency sensitivity property is set to 0 ms, that is, always forward the request to the fastest available backend.
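+
+The decision flow in the table can be read as a filtering pipeline: keep healthy backends, keep the best priority, keep backends within the latency sensitivity window, and then distribute by weight. The sketch below mimics that flow with illustrative data structures; it isn't the service's implementation, and it approximates weighted round-robin with a weighted random choice.
+
+```python
+import random
+
+backends = [
+    # name, enabled, healthy, priority, latency to this edge (ms), weight
+    {"name": "A", "enabled": True,  "healthy": True,  "priority": 1, "latency_ms": 15, "weight": 5},
+    {"name": "B", "enabled": True,  "healthy": True,  "priority": 1, "latency_ms": 30, "weight": 8},
+    {"name": "C", "enabled": True,  "healthy": False, "priority": 1, "latency_ms": 10, "weight": 1},
+    {"name": "D", "enabled": True,  "healthy": True,  "priority": 1, "latency_ms": 60, "weight": 1},
+    {"name": "E", "enabled": False, "healthy": True,  "priority": 1, "latency_ms": 20, "weight": 1},
+    {"name": "F", "enabled": True,  "healthy": True,  "priority": 2, "latency_ms": 25, "weight": 1},
+]
+
+def pick_backend(backends, latency_sensitivity_ms=30):
+    # 1. Keep only enabled backends that passed their health probes.
+    available = [b for b in backends if b["enabled"] and b["healthy"]]
+    # 2. Keep only the best (lowest) priority value among those.
+    top_priority = min(b["priority"] for b in available)
+    available = [b for b in available if b["priority"] == top_priority]
+    # 3. Keep backends within the latency sensitivity window of the fastest one.
+    fastest = min(b["latency_ms"] for b in available)
+    pool = [b for b in available if b["latency_ms"] <= fastest + latency_sensitivity_ms]
+    # 4. Distribute by weight (a weighted random choice stands in for round-robin).
+    return random.choices(pool, weights=[b["weight"] for b in pool], k=1)[0]["name"]
+
+# With a 30 ms sensitivity, only A and B remain, served in roughly a 5:8 ratio.
+print(pick_backend(backends))
+```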
+
+### <a name = "priority"></a>Priority-based traffic-routing
+
+Often an organization wants to provide high availability for their services by deploying more than one backup service in case the primary one goes down. Across the industry, this topology is also referred to as Active/Standby or Active/Passive deployment topology. The 'Priority' traffic-routing method allows Azure customers to easily implement this failover pattern.
+
+Your default Front Door contains an equal priority list of backends. By default, Front Door sends traffic only to the top priority backends (lowest value for priority) that is, the primary set of backends. If the primary backends aren't available, Front Door routes the traffic to the secondary set of backends (second lowest value for priority). If both the primary and secondary backends aren't available, the traffic goes to the third, and so on. Availability of the backend is based on the configured status (enabled or disabled) and the ongoing backend health status as determined by the health probes.
+
+#### Configuring priority for backends
+
+Each backend in your backend pool of the Front Door configuration has a property called 'Priority', which is a number between 1 and 5. You configure the priority explicitly for each backend. Lower values represent a higher priority, and backends can share priority values.
+
+### <a name = "weighted"></a>Weighted traffic-routing method
+The 'Weighted' traffic-routing method allows you to distribute traffic evenly or to use a pre-defined weighting.
+
+In the Weighted traffic-routing method, you assign a weight to each backend in the Front Door configuration of your backend pool. The weight is an integer from 1 to 1000. This parameter uses a default weight of '50'.
+
+With the list of available backends that have an acceptable latency sensitivity, the traffic gets distributed with a round-robin mechanism using the ratio of weights specified. If the latency sensitivity gets set to 0 milliseconds, then this property doesn't take effect unless there are two backends with the same network latency.
+
+The weighted method enables some useful scenarios:
+
+* **Gradual application upgrade**: Route a percentage of traffic to a new backend, and gradually increase that traffic over time to bring it on par with other backends.
+* **Application migration to Azure**: Create a backend pool with both Azure and external backends, and adjust the weights to prefer the new backends. You can set this up gradually: start with the new backends disabled, then assign them the lowest weights, slowly increase the weights until they take most of the traffic, and finally disable the less-preferred backends and remove them from the pool.
+* **Cloud-bursting for additional capacity**: Quickly expand an on-premises deployment into the cloud by putting it behind Front Door. When you need extra capacity in the cloud, you can add or enable more backends and specify what portion of traffic goes to each backend.
+
+## Next steps
+
+Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md)
frontdoor Concept Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-private-link.md
+
+ Title: 'Secure your Origin with Private Link in Azure Front Door Standard/Premium (Preview)'
+description: This page provides information about how to secure connectivity to your origin using Private Link.
+
+documentationcenter: ''
+ Last updated: 02/18/2021
+# Secure your Origin with Private Link in Azure Front Door Standard/Premium (Preview)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
+
+## Overview
+
+[Azure Private Link](../../private-link/private-link-overview.md) enables you to access Azure PaaS Services and Azure hosted services over a Private Endpoint in your virtual network. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+The Azure Front Door Premium SKU can connect to your origin using the Private Link service. Your applications can be hosted in your private virtual network or behind a PaaS service, without being accessible from the public internet.
++
+When you enable Private Link to your origin in the Azure Front Door Premium configuration, Front Door creates a private endpoint on your behalf from Front Door's regional private network. This endpoint is managed by Azure Front Door. You'll receive an Azure Front Door private endpoint request for approval at your origin. After you approve the request, a private IP address is assigned from Front Door's virtual network, and traffic between Azure Front Door and your origin traverses the established private link over the Azure network backbone. Incoming traffic to your origin is now secured when it comes from your Azure Front Door.
++
+Azure Front Door Premium supports various origin types. If your origin is hosted on a set of virtual machines in your private network, you need to first create an internal standard load balancer, enable Private Link service on the standard load balancer, and then select the Custom origin type. For the private link configuration, select **Microsoft.Network/PrivateLinkServices** as the resource type. For PaaS services such as Azure Web Apps and Storage accounts, enable Private Link from the corresponding service first, and then select **Microsoft.Web/Sites** for Web Apps or **Microsoft.Storage/StorageAccounts** for Storage accounts as the private link resource type.
+
+## Limitations
+
+Azure Front Door private endpoints are available in the following regions during public preview: East US, West US 2, and South Central US.
+
+For the best latency, you should always pick an Azure region closest to your origin when choosing to enable Front Door private link endpoint.
+
+Azure Front Door private endpoints get managed by the platform and under the subscription of Azure Front Door. Azure Front Door allows private link connections to the same customer subscription that is used to create the Front Door profile.
+
+## Next steps
+
+* To connect Azure Front Door Premium to Virtual Machines using Private Link service, see [Create a Private Endpoint](../../private-link/create-private-endpoint-portal.md).
+* To connect Azure Front Door Premium to your Web App via Private Link service, see [Connect to a web app using a Private endpoint](../../private-link/tutorial-private-endpoint-webapp-portal.md).
+* To connect Azure Front Door Premium to your Storage Account via private link service, see [Connect to a storage account using Private endpoint](../../private-link/tutorial-private-endpoint-storage-portal.md).
frontdoor Concept Route https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-route.md
+
+ Title: What is Azure Front Door Standard/Premium Route?
+description: This article helps you understand how Azure Front Door Standard/Premium matches which routing rule to use for an incoming request.
+ Last updated: 02/18/2021
+# What is Azure Front Door Standard/Premium (Preview) Route?
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+Azure Front Door Standard/Premium Route defines how traffic is handled when an incoming request arrives at the Azure Front Door environment. Through the Route settings, an association is defined between a domain and a backend origin group. By turning on advanced features such as Patterns to match and Rule Sets, you can achieve more granular control over the traffic.
+
+A Front Door Standard/Premium routing configuration is composed of two major parts: "left-hand side" and "right-hand side". We match the incoming request to the left-hand side of the route and the right-hand side defines how we process the request.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+### Incoming match (left-hand side)
+
+The following properties determine whether the incoming request matches the routing rule (or left-hand side):
+
+* **HTTP Protocols** (HTTP/HTTPS)
+* **Hosts** (for example, www\.foo.com, \*.bar.com)
+* **Paths** (for example, /\*, /users/\*, /file.gif)
+
+These properties are expanded out internally so that every combination of Protocol/Host/Path is a potential match set.
+
+### Route data (right-hand side)
+
+The decision of how to process the request depends on whether caching is enabled for the Route. If a cached response isn't available, the request is forwarded to the appropriate backend.
+
+## Route matching
+
+This section will focus on how we match to a given Front Door routing rule. The basic concept is that we always match to the **most-specific match first** looking only at the "left-hand side". We first match based on HTTP protocol, then Frontend host, then the Path.
+
+### Frontend host matching
+
+When matching Frontend hosts, we use the logic defined below:
+
+1. Look for any routing with an exact match on the host.
+2. If no exact frontend hosts match, reject the request and send a 400 Bad Request error.
+
+To explain this process further, let's look at an example configuration of Front Door routes (left-hand side only):
+
+| Routing rule | Frontend hosts | Path |
+|-|--|-|
+| A | foo.contoso.com | /\* |
+| B | foo.contoso.com | /users/\* |
+| C | www\.fabrikam.com, foo.adventure-works.com | /\*, /images/\* |
+
+If the following incoming requests were sent to Front Door, they would match against the following routing rules from above:
+
+| Incoming frontend host | Matched routing rule(s) |
+|||
+| foo.contoso.com | A, B |
+| www\.fabrikam.com | C |
+| images.fabrikam.com | Error 400: Bad Request |
+| foo.adventure-works.com | C |
+| contoso.com | Error 400: Bad Request |
+| www\.adventure-works.com | Error 400: Bad Request |
+| www\.northwindtraders.com | Error 400: Bad Request |
+
+### Path matching
+
+After Azure Front Door Standard/Premium determines the specific frontend host and filters the possible routing rules to just the routes with that frontend host, it then filters the routing rules based on the request path. We use logic similar to frontend host matching:
+
+1. Look for any routing rule with an exact match on the Path
+2. If there's no exact-match Path, look for routing rules with a wildcard Path that matches
+3. If no routing rules are found with a matching Path, then reject the request and return a 400: Bad Request error HTTP response.
+
+>[!NOTE]
+> Any Paths without a wildcard are considered to be exact-match Paths. Even if the Path ends in a slash, it's still considered exact match.
+
+To explain further, let's look at another set of examples:
+
+| Routing rule | Frontend host | Path |
+|-||-|
+| A | www\.contoso.com | / |
+| B | www\.contoso.com | /\* |
+| C | www\.contoso.com | /ab |
+| D | www\.contoso.com | /abc |
+| E | www\.contoso.com | /abc/ |
+| F | www\.contoso.com | /abc/\* |
+| G | www\.contoso.com | /abc/def |
+| H | www\.contoso.com | /path/ |
+
+Given that configuration, the following example matching table would result:
+
+| Incoming Request | Matched Route |
+|||
+| www\.contoso.com/ | A |
+| www\.contoso.com/a | B |
+| www\.contoso.com/ab | C |
+| www\.contoso.com/abc | D |
+| www\.contoso.com/abzzz | B |
+| www\.contoso.com/abc/ | E |
+| www\.contoso.com/abc/d | F |
+| www\.contoso.com/abc/def | G |
+| www\.contoso.com/abc/defzzz | F |
+| www\.contoso.com/abc/def/ghi | F |
+| www\.contoso.com/path | B |
+| www\.contoso.com/path/ | H |
+| www\.contoso.com/path/zzz | B |
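+
+A minimal sketch of the "exact match first, then longest matching wildcard prefix" behavior shown in the table, using illustrative (name, path) pairs rather than the service's actual configuration model:
+
+```python
+def match_route(request_path, routes):
+    """routes: list of (name, path) pairs; a path ending in '/*' is a wildcard prefix."""
+    # 1. Exact-match paths win first.
+    for name, path in routes:
+        if not path.endswith("/*") and request_path == path:
+            return name
+    # 2. Otherwise, take the longest wildcard prefix that matches.
+    candidates = [(len(path) - 1, name) for name, path in routes
+                  if path.endswith("/*") and request_path.startswith(path[:-1])]
+    if candidates:
+        return max(candidates)[1]
+    return None  # Front Door returns a 400 Bad Request in this case.
+
+routes = [("A", "/"), ("B", "/*"), ("C", "/ab"), ("D", "/abc"),
+          ("E", "/abc/"), ("F", "/abc/*"), ("G", "/abc/def"), ("H", "/path/")]
+print(match_route("/abc/defzzz", routes))  # F
+print(match_route("/path", routes))        # B
+print(match_route("/abc", routes))         # D
+```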
+
+>[!WARNING]
+> If there are no routing rules for an exact-match frontend host with a catch-all route Path (`/*`), then there will not be a match to any routing rule.
+>
+> Example configuration:
+>
+> | Route | Host | Path |
+> |-|||
+> | A | profile.contoso.com | /api/\* |
+>
+> Matching table:
+>
+> | Incoming request | Matched Route |
+> |||
> | profile.contoso.com/other | None. Error 400: Bad Request |
+
+### Routing decision
+
+Once Azure Front Door Standard/Premium has matched to a single routing rule, it then needs to choose how to process the request. If Azure Front Door Standard/Premium has a cached response available for the matched routing rule, then the request gets served back to the client. The next thing Azure Front Door Standard/Premium evaluates is whether or not you have a Rule Set for the matched routing rule. If there isn't a Rule Set defined, the request is forwarded to the backend pool as is. Otherwise, the Rule Sets are executed in the order in which they're configured.
+
+## Next steps
+
+Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
frontdoor Concept Rule Set Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-rule-set-actions.md
+
+ Title: Configure Azure Front Door Standard/Premium rule set actions
+description: This article provides a list of the various actions you can do with Azure Front Door rule set.
+ Last updated: 02/18/2021
+# Azure Front Door Standard/Premium Rule Set Actions
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+An Azure Front Door [Rule Set](concept-rule-set.md) consists of rules with a combination of match conditions and actions. This article provides a detailed description of the actions you can use in a Rule Set. An action defines the behavior that's applied to the request types that the match conditions identify. In an Azure Front Door Rule Set, a rule can contain up to five actions. Server variables are supported in all actions.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+The following actions are available to use in Azure Front Door rule set.
+
+## Cache expiration
+
+Use this action to overwrite the time to live (TTL) value of the endpoint for requests that the rule's match conditions specify.
+
+### Required fields
+
+The following description applies when selecting these cache behaviors and the rule matches:
+
+Cache behavior | Description
+|-
+Bypass cache | The content isn't cached.
+Override | The TTL value returned from your origin is overwritten with the value specified in the action. This behavior is applied only if the response is cacheable. If the Cache-Control response header has the value "no-cache", "private", or "no-store", the action isn't applied.
+Set if missing | If no TTL value is returned from your origin, the rule sets the TTL to the value specified in the action. This behavior is applied only if the response is cacheable. If the Cache-Control response header has the value "no-cache", "private", or "no-store", the action isn't applied.
+
+### Additional fields
+
+Days | Hours | Minutes | Seconds
+--|-||--
+Int | Int | Int | Int
+
+## Cache key query string
+
+Use this action to modify the cache key based on query strings.
+
+### Required fields
+
+The following description applies when selecting these behaviors and the rule matches:
+
+Behavior | Description
+|
+Include | Query strings specified in the parameters get included when the cache key gets generated.
+Cache every unique URL | Each unique URL has its own cache key.
+Exclude | Query strings specified in the parameters get excluded when the cache key gets generated.
+Ignore query strings | Query strings aren't considered when the cache key gets generated.
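+
+As a rough illustration of how these behaviors affect the query-string portion of a cache key, the following sketch uses only the Python standard library; the function name and return format are illustrative, not how Front Door computes its cache keys internally:
+
+```python
+from urllib.parse import parse_qsl, urlencode, urlsplit
+
+def cache_key_query(url, behavior, parameters=()):
+    """Return the query-string portion that would feed into the cache key."""
+    query = parse_qsl(urlsplit(url).query)
+    if behavior == "Ignore query strings":
+        return ""
+    if behavior == "Cache every unique URL":
+        return urlencode(query)
+    if behavior == "Include":
+        return urlencode([(k, v) for k, v in query if k in parameters])
+    if behavior == "Exclude":
+        return urlencode([(k, v) for k, v in query if k not in parameters])
+    raise ValueError(f"Unknown behavior: {behavior}")
+
+url = "https://contoso.com/article.aspx?id=123&title=fabrikam&session=abc"
+print(cache_key_query(url, "Exclude", parameters={"session"}))  # id=123&title=fabrikam
+print(cache_key_query(url, "Include", parameters={"id"}))       # id=123
+```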
+
+## Modify request header
+
+Use this action to modify headers that are present in requests sent to your origin.
+
+### Required fields
+
+The following description applies when selecting these actions and the rule matches:
+
+Action | HTTP header name | Value
+-||
+Append | The header specified in **Header name** gets added to the request with the specified value. If the header is already present, the value is appended to the existing value. | String
+Overwrite | The header specified in **Header name** gets added to the request with the specified value. If the header is already present, the specified value overwrites the existing value. | String
+Delete | If the header specified in the rule is present, the header gets deleted from the request. | String
+
+## Modify response header
+
+Use this action to modify headers that are present in responses returned to your clients.
+
+### Required fields
+
+The following description applies when selecting these actions and the rule matches:
+
+Action | HTTP Header name | Value
+-||
+Append | The header specified in **Header name** gets added to the response by using the specified **Value**. If the header is already present, **Value** is appended to the existing value. | String
+Overwrite | The header specified in **Header name** gets added to the response by using the specified **Value**. If the header is already present, **Value** overwrites the existing value. | String
+Delete | If the header specified in the rule is present, the header gets deleted from the response. | String
+
+## URL redirect
+
+Use this action to redirect clients to a new URL.
+
+### Required fields
+
+Field | Description
+|
+Redirect Type | Select the response type to return to the requestor: Found (302), Moved (301), Temporary redirect (307), and Permanent redirect (308).
+Redirect protocol | Match Request, HTTP, HTTPS.
+Destination host | Select the host name you want the request to be redirected to. Leave blank to preserve the incoming host.
+Destination path | Define the path to use in the redirect. Leave blank to preserve the incoming path.
+Query string | Define the query string used in the redirect. Leave blank to preserve the incoming query string.
+Destination fragment | Define the fragment to use in the redirect. Leave blank to preserve the incoming fragment.
+
+## URL rewrite
+
+Use this action to rewrite the path of a request that's en route to your origin.
+
+### Required fields
+
+Field | Description
+|
+Source pattern | Define the source pattern in the URL path to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, use a forward slash (**/**) as the source pattern value.
+Destination | Define the destination path to use in the rewrite. The destination path overwrites the source pattern.
+Preserve unmatched path | If set to **Yes**, the remaining path after the source pattern is appended to the new destination path.
+
+## Server Variable
+
+### Supported Variables
+
+| Variable name | Description |
+| -- | :-- |
+| socket_ip | The IP address of the direct connection to Azure Front Door edge. If the client used an HTTP proxy or a load balancer to send the request, the value of SocketIp is the IP address of the proxy or load balancer. |
+| client_ip | The IP address of the client that made the original request. If there was an X-Forwarded-For header in the request, the client IP is taken from that header. |
+| client_port | The IP port of the client that made the request. |
+| hostname | The host name in the request from client. |
+| geo_country | Indicates the requester's country/region of origin through its country/region code. |
+| http_method | The method used to make the URL request. For example, GET or POST. |
+| http_version | The request protocol. Usually HTTP/1.0, HTTP/1.1, or HTTP/2.0. |
+| query_string | The list of variable/value pairs that follows the "?" in the requested URL. Example: in the request *http://contoso.com:8080/article.aspx?id=123&title=fabrikam*, query_string value will be *id=123&title=fabrikam* |
+| request_scheme | The request scheme: http or https. |
+| request_uri | The full original request URI (with arguments). Example: in the request *http://contoso.com:8080/article.aspx?id=123&title=fabrikam*, request_uri value will be */article.aspx?id=123&title=fabrikam* |
+| server_port | The port of the server that accepted a request. |
+| ssl_protocol | The protocol of an established TLS connection. |
+| url_path | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments. Example: in the request *http://contoso.com:8080/article.aspx?id=123&title=fabrikam*, uri_path value will be */article.aspx* |
+
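+As a rough cross-check of what several of these variables resolve to, the following sketch derives comparable values from the example request URL with the Python standard library. The dictionary keys mirror the table; this isn't the service's implementation.
+
+```python
+from urllib.parse import urlsplit
+
+request = "http://contoso.com:8080/article.aspx?id=123&title=fabrikam"
+parts = urlsplit(request)
+
+server_variables = {
+    "hostname": parts.hostname,                     # contoso.com
+    "server_port": parts.port,                      # 8080
+    "request_scheme": parts.scheme,                 # http
+    "query_string": parts.query,                    # id=123&title=fabrikam
+    "url_path": parts.path,                         # /article.aspx
+    "request_uri": parts.path + "?" + parts.query,  # /article.aspx?id=123&title=fabrikam
+}
+print(server_variables)
+```
+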
+### Server Variable Format
+
+**Format:** {variable:offset}, {variable:offset:length}, {variable}
+
+### Supported server variable actions
+
+* Request header
+* Response header
+* Cache key query string
+* URL rewrite
+* URL redirect
+
+## Next steps
+
+* Learn more about [Azure Front Door Standard/Premium Rule Set](concept-rule-set.md).
+* Learn more about [Rule Set Match Conditions](concept-rule-set-match-conditions.md).
frontdoor Concept Rule Set Match Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-rule-set-match-conditions.md
+
+ Title: Configure Azure Front Door Standard/Premium rule set match conditions
+description: This article provides a list of the various match conditions available with Azure Front Door Standard/Premium rule set.
+ Last updated: 02/18/2021
+# Azure Front Door Standard/Premium (Preview) Rule Set match conditions
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+In an Azure Front Door Standard/Premium [Rule Set](concept-rule-set.md), a rule consists of zero or more match conditions and an action. This article provides detailed descriptions of the match conditions you can use in an Azure Front Door Standard/Premium Rule Set.
+
+The first part of a rule is a match condition or set of match conditions. A rule can consist of up to 10 match conditions. A match condition identifies specific types of requests for which defined actions are done. If you use multiple match conditions, the match conditions are grouped together by using AND logic. For all match conditions that support multiple values (noted as "space-separated"), the "OR" operator is assumed.
+
+For example, you can use a match condition to:
+
+* Filter requests based on a specific IP address, country, or region.
+* Filter requests by header information.
+* Filter requests from mobile devices or desktop devices.
+* Filter requests from request file name and file extension.
+* Filter requests from request URL, protocol, path, query string, post args, etc.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+The following match conditions are available to use in Azure Front Door Standard/Premium Rules Set:
+
+## Device type
+
+Identifies requests made from a mobile device or desktop device.
+
+#### Required fields
+
+Operator | Supported values
+|-
+Equals, Not equals | Mobile, Desktop
+
+## Post argument
+
+Identifies requests based on arguments defined for the POST request method that's used in the request.
+
+#### Required fields
+
+Argument name | Operator | Argument value | Case transform
+--|-|-|
+String | [Operator list](#operator-list) | String, Int | Lowercase, Uppercase
+
+## Query string
+
+Identifies requests that contain a specific query string parameter. This parameter is set to a value that matches a specific pattern. Query string parameters (for example, **parameter=value**) in the request URL determine whether this condition is met. This match condition identifies a query string parameter by its name and accepts one or more values for the parameter value.
+
+#### Required fields
+
+Operator | Query string | Case Transform
+|--|
+[Operator list](#operator-list) | String, Int | Lowercase, Uppercase
+
+## Remote address
+
+Identifies requests based on the requester's location or IP address.
+
+#### Required fields
+
+Operator | Supported values
+|--
+Geo Match | Country code
+IP Match | IP address (space-separated)
+Not Geo Match | Country code
+Not IP Match | IP address (space-separated)
+
+#### Key information
+
+* Use CIDR notation.
+* For multiple IP addresses and IP address blocks, 'OR' logic is applied.
+  * **IPv4 example**: if you add the two IP addresses *1.2.3.4* and *10.20.30.40*, the condition is matched for any request that arrives from either address 1.2.3.4 or 10.20.30.40.
+  * **IPv6 example**: if you add the two IP addresses *1:2:3:4:5:6:7:8* and *10:20:30:40:50:60:70:80*, the condition is matched for any request that arrives from either address 1:2:3:4:5:6:7:8 or 10:20:30:40:50:60:70:80.
+* The syntax for an IP address block is the base IP address followed by a forward slash and the prefix size. For example:
+  * **IPv4 example**: *5.5.5.64/26* matches any request that arrives from an address in the range 5.5.5.64 through 5.5.5.127.
+  * **IPv6 example**: *1:2:3::/48* matches any request that arrives from an address in the range 1:2:3:0:0:0:0:0 through 1:2:3:ffff:ffff:ffff:ffff:ffff.
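+
+A quick way to sanity-check what a CIDR block covers is Python's standard `ipaddress` module (shown for illustration only; it isn't part of the Front Door configuration):
+
+```python
+import ipaddress
+
+# IPv4 block from the example above: 5.5.5.64/26 spans 5.5.5.64 - 5.5.5.127.
+block = ipaddress.ip_network("5.5.5.64/26")
+print(block[0], block[-1])                              # 5.5.5.64 5.5.5.127
+print(ipaddress.ip_address("5.5.5.100") in block)       # True
+
+# IPv6 block from the example above.
+v6_block = ipaddress.ip_network("1:2:3::/48")
+print(ipaddress.ip_address("1:2:3::1234") in v6_block)  # True
+```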
+
+## Request body
+
+Identifies requests based on specific text that appears in the body of the request.
+
+#### Required fields
+
+Operator | Request body | Case transform
+|--|
+[Operator list](#operator-list) | String, Int | Lowercase, Uppercase
+
+## Request header
+
+Identifies requests that use a specific header in the request.
+
+#### Required fields
+
+Header name | Operator | Header value | Case transform
+|-|--|
+String | [Operator list](#operator-list) | String, Int | Lowercase, Uppercase
+
+## Request method
+
+Identifies requests that use the specified request method.
+
+#### Required fields
+
+Operator | Supported values
+|-
+Equals, Not equals | GET, POST, PUT, DELETE, HEAD, OPTIONS, TRACE
+
+#### Key information
+
+Only the GET request method can generate cached content in Azure Front Door. All other request methods are proxied through the network.
+
+## Request protocol
+
+Identifies requests that use the specified protocol.
+
+#### Required fields
+
+Operator | Supported values
+|-
+Equals, Not equals | HTTP, HTTPS
+
+## Request URL
+
+Identifies requests that match the specified URL.
+
+#### Required fields
+
+Operator | Request URL | Case transform
+|-|
+[Operator list](#operator-list) | String, Int | Lowercase, Uppercase
+
+#### Key information
+
+When you use this rule condition, be sure to include protocol information. For example: *https://www.\<yourdomain\>.com*.
+
+## Request file extension
+
+Identifies requests that include the specified file extension in the file name in the requesting URL.
+
+#### Required fields
+
+Operator | Extension | Case transform
+|--|
+[Operator list](#operator-list) | String, Int | Lowercase, Uppercase
+
+#### Key information
+
+For extension, don't include a leading period; for example, use *html* instead of *.html*.
+
+## Request file name
+
+Identifies requests that include the specified file name in the requesting URL.
+
+#### Required fields
+
+Operator | File name | Case transform
+|--|
+[Operator list](#operator-list)| String, Int | Lowercase, Uppercase
+
+## Request path
+
+Identifies requests that include the specified path in the requesting URL.
+
+#### Required fields
+
+Operator | Value | Case Transform
+|-|
+[Operator list](#operator-list) | String, Int | Lowercase, Uppercase
+
+## <a name = "operator-list"></a>Operator list
+
+For rules that accept values from the standard operator list, the following operators are valid:
+
+* Any
+* Equals
+* Contains
+* Begins with
+* Ends with
+* Less than
+* Less than or equals
+* Greater than
+* Greater than or equals
+* Not any
+* Not contains
+* Not begins with
+* Not ends with
+* Not less than
+* Not less than or equals
+* Not greater than
+* Not greater than or equals
+* Regular Expression
+
+For numeric operators like *Less than* and *Greater than or equals*, the comparison used is based on length. The value in the match condition should be an integer that equals the length you want to compare.
+
+## Regular Expression
+
+Regex doesn't support the following operations:
+
+* Backreferences and capturing subexpressions
+* Arbitrary zero-width assertions
+* Subroutine references and recursive patterns
+* Conditional patterns
+* Backtracking control verbs
+* The \C single-byte directive
+* The \R newline match directive
+* The \K start of match reset directive
+* Callouts and embedded code
+* Atomic grouping and possessive quantifiers
+
+## Next steps
+
+* Learn more about [Rule Set](concept-rule-set.md).
+* Learn how to [configure your first Rules Set](how-to-configure-rule-set.md).
+* Learn more about [Rule Set actions](concept-rule-set-actions.md).
frontdoor Concept Rule Set Url Redirect And Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-rule-set-url-redirect-and-rewrite.md
+
+ Title: 'URL redirect and URL rewrite with Azure Front Door Standard/Premium (Preview)'
+description: This article helps you understand how Azure Front Door supports URL redirection and URL rewrite using Azure Front Door Rule Set.
+ Last updated: 02/18/2021
+# URL redirect and URL rewrite with Azure Front Door Standard/Premium (Preview)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+This article helps you understand how Azure Front Door Standard/Premium supports URL redirect and URL rewrite used in a Rule Set.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## URL redirect
+
+Azure Front Door can redirect traffic at each of the following levels: protocol, hostname, path, query string, and fragment. These capabilities can be configured for individual microservices because the redirection is path-based. URL redirect lets you simplify application configuration, optimize resource usage, and support new redirection scenarios, including global and path-based redirection.
+
+You can configure URL redirect via Rule Set.
++
+### Redirection types
+A redirect type sets the response status code for the clients to understand the purpose of the redirect. The following types of redirection are supported:
+
+* **301 (Moved permanently)**: Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource will use one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.
+* **302 (Found)**: Indicates that the target resource is temporarily under a different URI. Since the redirection can change on occasion, the client should continue to use the effective request URI for future requests.
+* **307 (Temporary redirect)**: Indicates that the target resource is temporarily under a different URI. The user agent MUST NOT change the request method if it does an automatic redirection to that URI. Since the redirection can change over time, the client ought to continue using the original effective request URI for future requests.
+* **308 (Permanent redirect)**: Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource should use one of the enclosed URIs.
+
+### Redirection protocol
+You can set the protocol that will be used for redirection. The most common use case of the redirect feature is HTTP to HTTPS redirection.
+
+* **HTTPS only**: Set the protocol to HTTPS only if you want to redirect traffic from HTTP to HTTPS. We recommend that you always set the redirection to HTTPS only.
+* **HTTP only**: Redirects the incoming request to HTTP. Use this value only if you want to keep your traffic HTTP (that is, non-encrypted).
+* **Match request**: This option keeps the protocol used by the incoming request. So, an HTTP request remains HTTP and an HTTPS request remains HTTPS post redirection.
+
+### Destination host
+As part of configuring a redirect routing, you can also change the hostname or domain for the redirect request. You can set this field to change the hostname in the URL for the redirection or otherwise preserve the hostname from the incoming request. So, using this field you can redirect all requests sent on `https://www.contoso.com/*` to `https://www.fabrikam.com/*`.
+
+### Destination path
+For cases where you want to replace the path segment of a URL as part of redirection, you can set this field with the new path value. Otherwise, you can choose to preserve the path value as part of redirect. So, using this field, you can redirect all requests sent to `https://www.contoso.com/\*` to `https://www.contoso.com/redirected-site`.
+
+### Query string parameters
+You can also replace the query string parameters in the redirected URL. To replace any existing query string from the incoming request URL, set this field to 'Replace' and then set the appropriate value. Otherwise, you can keep the original set of query strings by setting the field to 'Preserve'. As an example, using this field, you can redirect all traffic sent to `https://www.contoso.com/foo/bar` to `https://www.contoso.com/foo/bar?&utm_referrer=https%3A%2F%2Fwww.bing.com%2F`.
+
+### Destination fragment
+The destination fragment is the portion of URL after '#', which is used by the browser to land on a specific section of a web page. You can set this field to add a fragment to the redirect URL.
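+
+Taken together, the settings above describe how the redirect target is assembled from the incoming request. The sketch below shows one way to think about that composition; the function and parameter names are illustrative, not the service's API, and the redirect type (301/302/307/308) only sets the response status code, so it isn't part of the assembled URL.
+
+```python
+from urllib.parse import urlsplit, urlunsplit
+
+def build_redirect_location(incoming_url, protocol="Match request", host=None,
+                            path=None, query=None, fragment=None):
+    """Settings left as None preserve the corresponding part of the incoming request."""
+    parts = urlsplit(incoming_url)
+    # "Match request" keeps the incoming scheme; "HTTP only"/"HTTPS only" force it.
+    scheme = parts.scheme if protocol == "Match request" else protocol.split()[0].lower()
+    return urlunsplit((
+        scheme,
+        host if host is not None else parts.netloc,
+        path if path is not None else parts.path,
+        query if query is not None else parts.query,
+        fragment if fragment is not None else parts.fragment,
+    ))
+
+# Redirect contoso traffic to fabrikam, preserving the incoming path and query string.
+print(build_redirect_location("http://www.contoso.com/foo/bar?x=1",
+                              protocol="HTTPS only", host="www.fabrikam.com"))
+# https://www.fabrikam.com/foo/bar?x=1
+```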
+
+## URL rewrite
+
+Azure Front Door supports URL rewrite to rewrite the path of a request that's en route to your origin. URL rewrite allows you to add conditions to ensure that the URL or the specified headers get rewritten only when certain conditions get met. These conditions are based on the request and response information.
+
+With this feature, you can redirect users to different origins based on scenario, device type, and requested file type.
+
+You can configure URL rewrite via Rule Set.
++
+### Source pattern
+
+Source pattern is the URL path in the source request to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, use a forward slash (/) as the source pattern value.
+
+### Destination
+
+You can define the destination path to use in the rewrite. The destination path overwrites the source pattern.
+
+### Preserve unmatched path
+
+Preserve unmatched path allows you to append the remaining path after the source pattern to the new path.
+
+For example, if **Preserve unmatched path** is set to **Yes**:
+* If the incoming request is `www.contoso.com/sub/1.jpg`, the source pattern is set to `/`, and the destination is set to `/foo/`, then the content is served from `/foo/sub/1.jpg` at the origin.
+
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern is set to `/sub/`, and the destination is set to `/foo/`, then the content is served from `/foo/image/1.jpg` at the origin.
+
+If **Preserve unmatched path** is set to **No**:
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern is set to `/sub/`, and the destination is set to `/foo/2.jpg`, then the content is always served from `/foo/2.jpg` at the origin, no matter what path follows `www.contoso.com/sub/`.
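+
+These examples follow a simple prefix-replacement rule. A minimal sketch of that behavior, for illustration only (not the service's implementation):
+
+```python
+def rewrite_path(request_path, source_pattern, destination, preserve_unmatched_path):
+    if not request_path.startswith(source_pattern):
+        return request_path  # the rule doesn't apply to this request
+    if preserve_unmatched_path:
+        # Append whatever followed the source pattern to the new destination.
+        return destination + request_path[len(source_pattern):]
+    return destination
+
+print(rewrite_path("/sub/1.jpg", "/", "/foo/", True))                  # /foo/sub/1.jpg
+print(rewrite_path("/sub/image/1.jpg", "/sub/", "/foo/", True))        # /foo/image/1.jpg
+print(rewrite_path("/sub/image/1.jpg", "/sub/", "/foo/2.jpg", False))  # /foo/2.jpg
+```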
+
+## Next steps
+
+* Learn more about [Azure Front Door Standard/Premium Rule Set](concept-rule-set.md).
frontdoor Concept Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-rule-set.md
+
+ Title: 'Azure Front Door: Rule set'
+description: This article provides an overview of the Azure Front Door Standard/Premium Rules Set feature.
+ Last updated: 02/18/2021
+# What is a Rule Set for Azure Front Door Standard/Premium (Preview)?
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+A Rule Set is a customized rule engine that groups a combination of rules into a single set that you can associate with multiple routes. The Rule Set allows you to customize how requests get processed at the edge and how Azure Front Door handles those requests.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Common supported scenarios
+
+* Implementing security headers to prevent browser-based vulnerabilities like HTTP Strict-Transport-Security (HSTS), X-XSS-Protection, Content-Security-Policy, X-Frame-Options, and Access-Control-Allow-Origin headers for Cross-Origin Resource Sharing (CORS) scenarios. Security-based attributes can also be defined with cookies.
+
+* Route requests to mobile or desktop versions of your application based on the client device type.
+
+* Using redirect capabilities to return 301, 302, 307, and 308 redirects to the client to direct them to new hostnames, paths, query string, or protocols.
+
+* Dynamically modify the caching configuration of your route based on the incoming requests.
+
+* Rewrite the request URL path and forward the request to the appropriate origin in your configured origin group.
+
+* Add, modify, or remove request/response header to hide sensitive information or capture important information through headers.
+
+* Support server variables to dynamically change the request/response headers or URL rewrite paths/query strings, for example, when a new page loads or when a form is posted. Server variables are currently supported in **[Rule Set actions](concept-rule-set-actions.md)** only.
+
+## Architecture
+
+Rule Set handles requests at the edge. When a request arrives at your Azure Front Door Standard/Premium endpoint, WAF is executed first, followed by the settings configured in the Route. Those settings include the Rule Sets associated with the Route. Rule Sets are processed from top to bottom in the Route, and the same applies to rules within a Rule Set. For all the actions in a rule to be executed, all the match conditions within the rule have to be satisfied. If a request doesn't match any of the conditions in your Rule Set configuration, then only the configurations in the Route are executed.
+
+If **Stop evaluating remaining rules** gets checked, then all of the remaining Rule Sets associated with the Route aren't executed.
+
+### Example
+
+In the following diagram, WAF policies get executed first. A Rule Set gets configured to append a response header. Then the header changes the max-age of the cache control if the match condition gets met.
++
+## Terminology
+
+With Azure Front Door Rule Set, you can create multiple Rule Set configurations, each composed of a set of rules. The following outlines some helpful terminology you'll come across when configuring your Rule Set.
+
+For more information about quota limits, see [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
+
+* *Rule Set*: A set of rules that gets associated with one or multiple [Routes](concept-route.md). Each configuration is limited to 25 rules. You can create up to 10 configurations.
+
+* *Rule Set rule*: A rule composed of up to 10 match conditions and 5 actions. Rules are local to a Rule Set and can't be exported for use across Rule Sets. Users can create the same rule in multiple Rule Sets.
+
+* *Match Condition*: There are many match conditions that can be utilized to parse your incoming requests. A rule can contain up to 10 match conditions. Match conditions are evaluated with an **AND** operator. *Regular expression is supported in conditions*. A full list of match conditions can be found in [Rule Set Condition](concept-rule-set-match-conditions.md).
+
+* *Action*: Actions dictate how Azure Front Door handles the incoming requests based on the match conditions. You can modify caching behaviors, modify request and response headers, and do URL rewrites and URL redirects. *Server variables are supported on Action*. A rule can contain up to 5 actions. A full list of actions can be found in [Rule Set Actions](concept-rule-set-actions.md).
+
+## Next steps
+
+* Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
+* Learn how to configure your first [Rule Set](how-to-configure-rule-set.md).
+
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/create-front-door-portal.md
+
+ Title: 'Quickstart: Create an Azure Front Door Standard/Premium profile - Azure portal'
+description: This quickstart shows how to use Azure Front Door Standard/Premium Service for your highly available and high-performance global web application by using the Azure portal.
+++
+Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
+
+ms.devlang: na
+
+ na
+ Last updated : 02/18/2021+++
+# Quickstart: Create an Azure Front Door Standard/Premium profile - Azure portal
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
+
+In this quickstart, you learn how to create an Azure Front Door Standard/Premium profile using the Azure portal. You can create the Azure Front Door Standard/Premium profile through *Quick Create* with basic configurations or through *Custom create* with more advanced configurations. With *Custom create* you deploy two Web Apps. Next, you create the Azure Front Door Standard/Premium profile using the two Web Apps as your origin. You'll then verify connectivity to your Web Apps using the Azure Front Door Standard/Premium frontend hostname.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create Front Door profile - Quick Create
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the home page or the Azure menu, select **+ Create a resource**. Search for *Front Door Standard/Premium (Preview)*. Then select **Create**.
+
+1. On the **Compare offerings** page, select **Quick create**. Then select **Continue to create a Front Door**.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-quick-create.png" alt-text="Screenshot of compare offerings.":::
+
+1. On the **Create a front door profile** page, enter or select the following settings.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-quick-create-2.png" alt-text="Screenshot of Front Door quick create page.":::
+
+ | Settings | Value |
+ | | |
+ | **Subscription** | Select your subscription. |
+ | **Resource group** | Select **Create new** and enter *contoso-appservice* in the text box.|
+ | **Name** | Give your profile a name. This example uses **contoso-afd-quickcreate**. |
+ | **Tier** | Select either Standard or Premium SKU. Standard SKU is content delivery optimized. Premium SKU builds on Standard SKU and is focused on security. See [Tier Comparison](tier-comparison.md). |
+ | **Endpoint name** | Enter a globally unique name for your endpoint. |
+ | **Origin type** | Select the type of resource for your origin. In this example, we select an App service as the origin that has Private Link enabled. |
+ | **Origin host name** | Enter the hostname for your origin. |
+    | **Enable Private Link** | Select the check box if you want a private connection between your Azure Front Door and your origin. For more information, see [Private link guidance](concept-private-link.md) and [Enable private link](how-to-enable-private-link.md). |
+ | **Caching** | Select the check box if you want to cache contents closer to users globally using Azure Front Door's edge POPs and Microsoft network. |
+ | **WAF policy** | Select **Create new** or select an existing WAF policy from the dropdown if you want to enable this feature. |
+
+ > [!NOTE]
+ > When creating an Azure Front Door Standard/Premium profile, you must select an origin from the same subscription the Front Door is created in.
+
+1. Select **Review + Create** to review your settings.
+
+ > [!NOTE]
+ > It may take a few mins for the configurations to be propagated to all edge POPs.
+
+1. Then select **Create** to deploy your Front Door profile.
+
+1. If you enabled Private Link, go to your origin (App service in this example). Select **Networking** > **Configure Private Link**. Then select the pending request from Azure Front Door, and select **Approve**. After a few seconds, your application will be accessible through Azure Front Door in a secure manner.
+
+## Create Front Door profile - Custom Create
+
+### Create a web app with two instances as the origin
+
+If you already have an origin or an origin group configured, skip to Create a Front Door Standard/Premium (Preview) for your application.
+
+In this example, we create a web application with two instances that run in different Azure regions. Both the web application instances run in *Active/Active* mode, so either one can take traffic. This configuration differs from an *Active/Stand-By* configuration, where one acts as a failover.
+
+If you don't already have a web app, use the following steps to set up an example web app.
+
+1. Sign in to the Azure portal at https://portal.azure.com.
+
+1. On the top left-hand side of the screen, select **Create a resource** > **WebApp**.
+
+1. On the **Basics** tab of the **Create Web App** page, enter or select the following information.
+
+ | Setting | Value |
+ | | |
+ | **Subscription** | Select your subscription. |
+ | **Resource group** | Select **Create new** and enter *FrontDoorQS_rg1* in the text box.|
+ | **Name** | Enter a unique **Name** for your web app. This example uses *WebAppContoso-001*. |
+ | **Publish** | Select **Code**. |
+ | **Runtime stack** | Select **.NET Core 2.1 (LTS)**. |
+ | **Operating System** | Select **Windows**. |
+ | **Region** | Select **Central US**. |
+ | **Windows Plan** | Select **Create new** and enter *myAppServicePlanCentralUS* in the text box. |
+ | **Sku and size** | Select **Standard S1 100 total ACU, 1.75-GB memory**. |
+
+ :::image type="content" source="../media/create-front-door-portal/create-web-app.png" alt-text="Quick create front door premium SKU in the Azure portal":::
+
+1. Select **Review + create**, review the summary, and then select **Create**. It might take several minutes for the deployment to complete.
+
+After your deployment is complete, create a second web app. Use the same settings as above, except for the following settings:
+
+| Setting | Value |
+| | |
+| **Resource group** | Select **Create new** and enter *FrontDoorQS_rg2*. |
+| **Name** | Enter a unique name for your Web App, in this example, *WebAppContoso-002*. |
+| **Region** | A different region, in this example, *South Central US* |
+| **App Service plan** > **Windows Plan** | Select **New** and enter *myAppServicePlanSouthCentralUS*, and then select **OK**. |
+
+### Create a Front Door Standard/Premium (Preview) for your application
+
+Configure Azure Front Door Standard/Premium (Preview) to direct user traffic based on lowest latency between the two web apps servers. Also secure your Front Door with Web Application Firewall.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the home page or the Azure menu, select **+ Create a resource**. Search for *Front Door Standard/Premium (Preview)*. Then select **Create**.
+
+1. On the **Compare offerings** page, select **Custom create**. Then select **Continue to create a Front Door**.
+
+1. On the **Basics** tab, enter or select the following information, and then select **Next: Secret**.
+
+ | Setting | Value |
+ | | |
+ | **Subscription** | Select your subscription. |
+ | **Resource group** | Select **Create new** and enter *FrontDoorQS_rg0* in the text box. |
+ | **Resource group location** | Select **East US** |
+    | **Profile Name** | Enter a unique name in this subscription. This example uses **Webapp-Contoso-AFD**. |
+ | **Tier** | Select **Premium**. |
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-2.png" alt-text="Create Front Door profile":::
+
+1. *Optional*: **Secrets**. If you plan to use managed certificates, this step is optional. If you have an existing Key Vault in Azure that you plan to use for Bring Your Own Certificate (BYOC) for custom domains, then select **Add a certificate**. You can also add a certificate in the management experience after creation.
+
+ > [!NOTE]
+    > As a user, you need to have the right permissions to add the certificate from Azure Key Vault.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-secret.png" alt-text="Screenshot of add a secret in custom create.":::
+
+1. In the **Endpoint** tab, select **Add an Endpoint** and give your endpoint a globally unique name. You can create multiple endpoints in your Azure Front Door Standard/Premium profile after you finish the create experience. This example uses *contoso-frontend*. Leave Origin response timeout (in seconds) and Status as default. Select **Add** to add the endpoint.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-endpoint.png" alt-text="Screenshot of add an endpoint.":::
+
+1. Next, add an Origin Group that contains your two web apps. Select **+ Add** to open the **Add an origin group** page. For Name, enter *myOriginGroup*, then select **+ Add an origin**.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-origin-group.png" alt-text="Screenshot of add an origin group.":::
+
+1. On the **Add an origin** page, enter or select the information below. Then select **Add**.
+
+ | Setting | Value |
+ | | |
+ | **Name** | Enter **webapp1** |
+ | **Origin type** | Select **App services** |
+ | **Host name** | Select `WebAppContoso-001.azurewebsites.net` |
+ | **Origin host header** | Select `WebAppContoso-001.azurewebsites.net` |
+ | **Other fields** | Leave all other fields as default. |
+
+ > [!NOTE]
+ > When creating an Azure Front Door Standard/Premium profile, you must select an origin from the same subscription the Azure Front Door Standard/Premium is created in.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-origin-1.png" alt-text="Screenshot of add more origins.":::
+
+1. Repeat step 8 to add the second origin webapp002. Select `webappcontoso-002.azurewebsites.net` as the **Origin host name** and **Origin host header**.
+
+1. On the **Add an origin group** page, you'll see two origins added, leave all other fields default.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-origin-group-2.png" alt-text="Screenshot of add an origin group page.":::
+
+1. Next, add a Route to map your frontend endpoint to the Origin group. This route forwards requests from the endpoint to myOriginGroup. Select **+ Add** on Route to configure a Route.
+
+1. On the **Add a route** page, enter or select the information below. Then select **Add**.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-route-without-caching.png" alt-text="Add route without caching":::
+
+ | Setting | Value |
+ | | |
+ | **Name** | Enter **MyRoute** |
+ | **Domain** | Select `contoso-frontend.z01.azurefd.net` |
+ | **Host name** | Select `WebAppContoso-001.azurewebsites.net` |
+ | **Patterns to match** | Leave as default. |
+ | **Accepted protocols** | Leave as default. |
+ | **Redirect** | Leave it default for **Redirect all traffic to use HTTPS**. |
+ | **Origin group** | Select **MyOriginGroup**. |
+ | **Origin path** | Leave as default. |
+ | **Forwarding protocol** | Select **Match incoming request**. |
+ | **Caching** | Leave unchecked in this quickstart. If you want to have your contents cached on edges, select the check box for **Enable caching**. |
+ | **Rules** | Leave as default. After you create your front door profile, you can create custom rules and apply them to routes. |
+
+ >[!WARNING]
+ > **Ensure** that there is a route for each endpoint. An absence of a route can cause an endpoint to fail.
+
+1. Next, select **+ Add** on Security to add a WAF policy. Select **Add New** and give your policy a unique name. Select the check box for **Add bot protection**. Select the endpoint in **Domains**, then select **Add**.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-add-waf-policy-2.png" alt-text="add WAF policy":::
+
+1. Select **Review + Create**, and then **Create**. It takes a few minutes for the configurations to be propagated to all edge POPs. Now you have your first Front Door profile and endpoint.
+
+ :::image type="content" source="../media/create-front-door-portal/front-door-custom-create-review.png" alt-text="Review custom create":::
+
+## Verify Azure Front Door
+
+When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created. In a browser, go to `contoso-frontend.z01.azurefd.net`. Your request will automatically get routed to the nearest server from the specified servers in the origin group.
+
+If you created these apps in this quickstart, you'll see an information page.
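+
+If you'd like to check the endpoint from a script instead of a browser, the sketch below uses Python's `requests` package; the hostname is the example endpoint from this quickstart, so substitute your own.
+
+```python
+import requests
+
+# Example frontend hostname from this quickstart; replace it with your own endpoint.
+response = requests.get("https://contoso-frontend.z01.azurefd.net")
+
+# Expect HTTP 200 and the web app's information page while at least one origin is running.
+print(response.status_code)
+print(response.headers.get("X-Azure-Ref"))  # Request tracking reference added by Azure Front Door, if present
+```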
+
+To test instant global failover, we'll use the following steps:
+
+1. Open a browser, as described above, and go to the frontend address: `contoso-frontend.z01.azurefd.net`.
+
+1. In the Azure portal, search for and select *App services*. Scroll down to find one of your web apps, **WebAppContoso-001** in this example.
+
+1. Select your web app, and then select **Stop**, and **Yes** to verify.
+
+1. Refresh your browser. You should see the same information page.
+
+ >[!TIP]
+    >There might be a slight delay before these actions take effect. You might need to refresh again.
+
+1. Find the other web app, and stop it as well.
+
+1. Refresh your browser. This time, you should see an error message.
+
+ :::image type="content" source="../media/create-front-door-portal/web-app-stopped-message.png" alt-text="Both instances of the web app stopped":::
+
+## Clean up resources
+
+After you're done, you can remove all the items you created. Deleting a resource group also deletes its contents. If you don't intend to use this Front Door, you should remove resources to avoid unnecessary charges.
+
+1. In the Azure portal, search for and select **Resource groups**, or select **Resource groups** from the Azure portal menu.
+
+1. Filter or scroll down to find a resource group, such as **FrontDoorQS_rg0**.
+
+1. Select the resource group, then select **Delete resource group**.
+
+ >[!WARNING]
+    >This action is irreversible.
+
+1. Type the resource group name to verify, and then select **Delete**.
+
+Repeat the procedure for the other two groups.
+
+## Next steps
+
+Advance to the next article to learn how to add a custom domain to your Front Door.
+> [!div class="nextstepaction"]
+> [Add a custom domain](how-to-add-custom-domain.md)
frontdoor Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/faq.md
+
+ Title: 'Azure Front Door: Frequently asked questions'
+description: This page provides answers to frequently asked questions about Azure Front Door Standard/Premium.
+++++ Last updated : 02/18/2021+++
+# Frequently asked questions for Azure Front Door Standard/Premium (Preview)
+
+This article answers common questions about Azure Front Door features and functionality.
+
+## General
+
+### What is Azure Front Door?
+
+Azure Front Door is a fast, reliable, and secure modern cloud CDN with intelligent threat protection. It provides static and dynamic content acceleration, global load balancing, and enhanced security for your global hyper-scale applications, APIs, websites, and cloud services.
+
+### What features does Azure Front Door support?
+
+Azure Front Door supports:
+
+* Both static content and dynamic application acceleration
+* TLS/SSL offloading and end to end TLS
+* Web Application Firewall
+* Cookie-based session affinity
+* URL path-based routing
+* Free certificates and multiple domain management
+
+For a full list of supported features, see [Overview of Azure Front Door](overview.md).
+
+### What is the difference between Azure Front Door and Azure Application Gateway?
+
+While both Front Door and Application Gateway are layer 7 (HTTP/HTTPS) load balancers, the primary difference is that Front Door is a global service, whereas Application Gateway is a regional service. While Front Door can load balance between your different scale units/clusters/stamp units across regions, Application Gateway allows you to load balance between your VMs/containers that are within a scale unit.
+
+### When should we deploy an Application Gateway behind Front Door?
+
+The key scenarios why one should use Application Gateway behind Front Door are:
+
+* Front Door can do path-based load balancing only at the global level. If you want to load balance traffic even further within your virtual network (VNET), you should use Application Gateway.
+* Because Front Door doesn't work at a VM/container level, it can't do Connection Draining. However, Application Gateway allows you to do Connection Draining.
+* With an Application Gateway behind Front Door, one can achieve 100% TLS/SSL offload and route only HTTP requests within their virtual network (VNET).
+* Front Door and Application Gateway both support session affinity. Front Door can direct ensuing traffic from a user session to the same cluster or backend in a given region. Application Gateway can affinitize the traffic to the same server within the cluster.
+
+### Can we deploy Azure Load Balancer behind Front Door?
+
+Azure Front Door needs a public VIP or a publicly available DNS name to route the traffic to. Deploying an Azure Load Balancer behind Front Door is a common use case.
+
+### What protocols does Azure Front Door support?
+
+Azure Front Door supports HTTP, HTTPS and HTTP/2.
+
+### How does Azure Front Door support HTTP/2?
+
+HTTP/2 protocol support is available to clients connecting to Azure Front Door only. The communication to backends in the backend pool is over HTTP/1.1. HTTP/2 support is enabled by default.
+
+### What resources are supported today as part of origin group?
+
+An origin group can be composed of Storage, Web App, Kubernetes instances, or any other custom hostname that has public connectivity. Azure Front Door requires that the origins are defined either via a public IP or a publicly resolvable DNS hostname. Members of an origin group can be across zones, regions, or even outside of Azure as long as they have public connectivity.
+
+### What regions is the service available in?
+
+Azure Front Door is a global service and isn't tied to any specific Azure region. The only location you need to specify while creating a Front Door is for the resource group. That location specifies where the metadata for the resource group will be stored. The Front Door resource itself is created as a global resource and the configuration is deployed globally to all the POPs (Point of Presence).
+
+### What are the POP locations for Azure Front Door?
+
+Azure Front Door has the same list of POP (Point of Presence) locations as Azure CDN from Microsoft. For the complete list of our POPs, see [Azure CDN POP locations from Microsoft](../../cdn/cdn-pop-locations.md).
+
+### Is Azure Front Door a dedicated deployment for my application or is it shared across customers?
+
+Azure Front Door is a globally distributed multi-tenant service. The infrastructure for Front Door is shared across all its customers. By creating a Front Door profile, you're defining the specific configuration required for your application. Changes made to your Front Door don't affect other Front Door configurations.
+
+### Is HTTP->HTTPS redirection supported?
+
+Yes. In fact, Azure Front Door supports host, path, and query string redirection, as well as partial URL redirection. Learn more about [URL redirection](concept-rule-set-url-redirect-and-rewrite.md).
+
+### How do I lock down the access to my backend to only Azure Front Door?
+
+To lock down your application to accept traffic only from your specific Front Door, you'll need to set up IP ACLs for your backend. Then restrict the traffic on your backend to the specific value of the header 'X-Azure-FDID' sent by Front Door. The following steps detail this process:
+
+* Configure IP ACLing for your backends to accept traffic from Azure Front Door's backend IP address space and Azure's infrastructure services only. Refer to the IP details below for ACLing your backend:
+
+    * Refer to the *AzureFrontDoor.Backend* section in [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) for Front Door's IPv4 backend IP address range. You can also use the service tag *AzureFrontDoor.Backend* in your [network security groups](../../virtual-network/network-security-groups-overview.md#security-rules).
+ * Azure's [basic infrastructure services](../../virtual-network/network-security-groups-overview.md#azure-platform-considerations) through virtualized host IP addresses: `168.63.129.16` and `169.254.169.254`.
+
+ > [!WARNING]
+    > Front Door's backend IP space may change later; however, we'll make sure to integrate with [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) before that happens. We recommend that you subscribe to [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) for any changes or updates.
+
+* Do a GET operation on your Front Door with the API version `2020-01-01` or higher. In the API call, look for `frontdoorID` field. Filter on the incoming header '**X-Azure-FDID**' sent by Front Door to your backend with the value of the field `frontdoorID`. You can also find `Front Door ID` value under the Overview section from Front Door portal page.
+
+* Apply rule filtering in your backend web server to restrict traffic based on the resulting 'X-Azure-FDID' header value.
+
+ Here's an example for [Microsoft Internet Information Services (IIS)](https://www.iis.net/):
+
+ ``` xml
+ <?xml version="1.0" encoding="UTF-8"?>
+ <configuration>
+ <system.webServer>
+ <rewrite>
+ <rules>
+ <rule name="Filter_X-Azure-FDID" patternSyntax="Wildcard" stopProcessing="true">
+ <match url="*" />
+ <conditions>
+ <add input="{HTTP_X_AZURE_FDID}" pattern="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" negate="true" />
+ </conditions>
+ <action type="AbortRequest" />
+ </rule>
+ </rules>
+ </rewrite>
+ </system.webServer>
+ </configuration>
+ ```
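+
+If your origin runs an application framework rather than IIS, the same check can be applied in application code. The following is a minimal sketch using Flask; the framework, route, and the environment variable used to hold your Front Door ID are illustrative choices, not part of the Front Door guidance.
+
+```python
+import os
+
+from flask import Flask, abort, request
+
+app = Flask(__name__)
+
+# Illustrative: keep the Front Door ID you retrieved earlier in configuration, not in code.
+EXPECTED_FDID = os.environ.get("AZURE_FDID", "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")
+
+
+@app.before_request
+def require_front_door():
+    # Reject any request that doesn't carry the X-Azure-FDID header with your Front Door ID.
+    if request.headers.get("X-Azure-FDID") != EXPECTED_FDID:
+        abort(403)
+
+
+@app.route("/")
+def index():
+    return "Served through Azure Front Door"
+```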
+
+### Can the anycast IP change over the lifetime of my Front Door?
+
+The frontend anycast IP for your Front Door should typically not change and may remain static for the lifetime of the Front Door. However, there are **no guarantees** for this. Don't take any direct dependencies on the IP.
+
+### Does Azure Front Door support static or dedicated IPs?
+
+No, Azure Front Door currently doesn't support static or dedicated frontend anycast IPs.
+
+### Does Azure Front Door support x-forwarded-for headers?
+
+Yes, Azure Front Door supports the X-Forwarded-For, X-Forwarded-Host, and X-Forwarded-Proto headers. For X-Forwarded-For, if the header was already present, Front Door appends the client socket IP to it. Otherwise, it adds the header with the client socket IP as the value. For X-Forwarded-Host and X-Forwarded-Proto, the value is overridden.
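+
+As an illustration only, an origin application can read these headers to recover the original client IP, host, and scheme. The Flask handler below is a hypothetical example, not Front Door configuration.
+
+```python
+from flask import Flask, request
+
+app = Flask(__name__)
+
+
+@app.route("/whoami")
+def whoami():
+    # X-Forwarded-For can hold a comma-separated list; the left-most entry is the original client socket IP.
+    forwarded_for = request.headers.get("X-Forwarded-For", "") or request.remote_addr
+    return {
+        "client_ip": forwarded_for.split(",")[0].strip(),
+        "forwarded_host": request.headers.get("X-Forwarded-Host"),
+        "forwarded_proto": request.headers.get("X-Forwarded-Proto"),
+    }
+```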
+
+### How long does it take to deploy an Azure Front Door? Does my Front Door still work when being updated?
+
+A new Front Door creation or any updates to an existing Front Door takes about 3 to 5 minutes for global deployment. That means in about 3 to 5 minutes, your Front Door configuration will be deployed across all of our POPs globally.
+
+Note - Custom TLS/SSL certificate updates take about 30 minutes to be deployed globally.
+
+Any updates to routes or backend pools are seamless and will cause zero downtime (if the new configuration is correct). Certificate updates won't cause any outage, unless you're switching from 'Azure Front Door Managed' to 'Use your own cert' or the other way around.
++
+## Configuration
+
+### Can Azure Front Door load balance or route traffic within a virtual network?
+
+Azure Front Door (AFD) requires a public IP or a publicly resolvable DNS name to route traffic. Azure Front Door can't route directly to resources in a virtual network. You can use an Application Gateway or an Azure Load Balancer with a public IP to solve this problem.
+
+### What are the various timeouts and limits for Azure Front Door?
+
+Learn about all the documented [timeouts and limits for Azure Front Door](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-service-limits).
+
+### How long does it take for a rule to take effect after being added to the Front Door Rules Engine?
+
+The Rules Engine configuration takes about 10 to 15 minutes to complete an update. You can expect the rule to take effect as soon as the update is completed.
+
+### Can I configure Azure CDN behind my Front Door profile or Front Door behind my Azure CDN?
+
+Azure Front Door and Azure CDN can't be configured together because both services use the same Azure edge sites when responding to requests.
+
+## Performance
+
+### How does Azure Front Door support high availability and scalability?
+
+Azure Front Door is a globally distributed multi-tenant platform with a huge amount of capacity to cater to your application's scalability needs. Delivered from the edge of Microsoft's global network, Front Door provides global load-balancing capability that allows you to fail over your entire application or even individual microservices across regions or different clouds.
+
+## TLS configuration
+
+### What TLS versions are supported by Azure Front Door?
+
+All Front Door profiles created after September 2019 use TLS 1.2 as the default minimum.
+
+Front Door supports TLS versions 1.0, 1.1 and 1.2. TLS 1.3 isn't yet supported.
+
+### What certificates are supported on Azure Front Door?
+
+To enable the HTTPS protocol on a Front Door custom domain, you can choose a certificate that gets managed by Azure Front Door or use your own certificate.
+The Front Door managed option provisions a standard TLS/SSL certificate via DigiCert and stores it in Front Door's Key Vault. If you choose to use your own certificate, you can onboard a certificate from a supported CA; it can be a standard TLS, extended validation, or even a wildcard certificate. Self-signed certificates aren't supported.
+
+### Does Front Door support autorotation of certificates?
+
+For the Front Door managed certificate option, the certificates are autorotated by Front Door. If you're using a Front Door managed certificate and see that the certificate expiry date is less than 60 days away, file a support ticket.
+
+For your own custom TLS/SSL certificate, autorotation isn't supported. Similar to how you set up the first time for a given custom domain, you'll need to point Front Door to the right certificate version in your Key Vault. Ensure that the service principal for Front Door still has access to the Key Vault. This updated certificate rollout operation by Front Door doesn't cause any production down time provided the subject name or SAN for the certificate doesn't change.
+
+### What are the current cipher suites supported by Azure Front Door?
+
+For TLS 1.2, the following cipher suites are supported:
+
+- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
+
+When you use custom domains with TLS 1.0/1.1 enabled, the following cipher suites are supported:
+
+- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
+- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
+- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
+- TLS_RSA_WITH_AES_256_GCM_SHA384
+- TLS_RSA_WITH_AES_128_GCM_SHA256
+- TLS_RSA_WITH_AES_256_CBC_SHA256
+- TLS_RSA_WITH_AES_128_CBC_SHA256
+- TLS_RSA_WITH_AES_256_CBC_SHA
+- TLS_RSA_WITH_AES_128_CBC_SHA
+- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
+
+### Can I configure TLS policy to control TLS Protocol versions?
+
+You can configure a minimum TLS version in Azure Front Door in the custom domain HTTPS settings using the Azure portal or the [Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). Currently, you can choose between 1.0 and 1.2.
+
+### Can I configure Front Door to only support specific cipher suites?
+
+No, configuring Front Door for specific cipher suites isn't supported. You can get your own custom TLS/SSL certificate from your Certificate Authority (say Verisign, Entrust, or Digicert). Then have specific cipher suites marked on the certificate when you generate it.
+
+### Does Front Door support OCSP stapling?
+
+Yes, OCSP stapling is supported by default by Front Door and no configuration is required.
+
+### Does Azure Front Door also support re-encryption of traffic to the backend?
+
+Yes, Azure Front Door supports TLS/SSL offload and end to end TLS, which re-encrypts the traffic to the backend. Since the connections to the backend happen over the public IP, it's recommended that you configure your Front Door to use HTTPS as the forwarding protocol.
+
+### Does Front Door support self-signed certificates on the backend for HTTPS connection?
+
+No, self-signed certificates aren't supported on Front Door and the restriction applies to both:
+
+* **Backends**: You can't use self-signed certificates when you're forwarding the traffic as HTTPS, using HTTPS health probes, or filling the cache from the origin for routing rules with caching enabled.
+* **Frontend**: You can't use self-signed certificates when using your own custom TLS/SSL certificate for enabling HTTPS on your custom domain.
+
+### Why is HTTPS traffic to my backend failing?
+
+Whether for health probes or for forwarding requests, there are two common reasons why HTTPS connections to your backend might fail:
+
+* **Certificate subject name mismatch**: For HTTPS connections, Front Door expects that your backend presents a certificate from a valid CA with subject name(s) matching the backend hostname. For example, if your backend hostname is set to `myapp-centralus.contosonews.net` and the certificate that your backend presents during the TLS handshake doesn't have `myapp-centralus.contosonews.net` or `*.contosonews.net` in the subject name, Front Door refuses the connection, which results in an error.
+    * **Solution**: It isn't recommended from a compliance standpoint, but you can work around this error by disabling the certificate subject name check for your Front Door. You can find this option under Settings in the Azure portal and under BackendPoolsSettings in the API.
+* **Backend hosting certificate from invalid CA**: Only certificates from [valid Certificate Authorities](troubleshoot-allowed-certificate-authority.md) can be used at the backend with Front Door. Certificates from internal CAs or self-signed certificates aren't allowed.
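+
+To see which subject names your backend actually presents, you can inspect its certificate directly. The sketch below uses Python's standard library; the backend hostname is the example from above, and hostname checking is skipped so the handshake succeeds even when the names don't match.
+
+```python
+import socket
+import ssl
+
+# Example backend hostname from above; replace it with your own origin.
+backend_host = "myapp-centralus.contosonews.net"
+
+# Keep CA validation on, but skip hostname matching so a name mismatch can still be inspected.
+context = ssl.create_default_context()
+context.check_hostname = False
+
+with socket.create_connection((backend_host, 443), timeout=10) as sock:
+    with context.wrap_socket(sock, server_hostname=backend_host) as tls:
+        cert = tls.getpeercert()
+        print(cert.get("subject"))          # Certificate subject
+        print(cert.get("subjectAltName"))   # DNS names that must cover the backend hostname
+```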
+
+### Can I use client/mutual authentication with Azure Front Door?
+
+No. Although Azure Front Door supports TLS 1.2, which introduced client/mutual authentication in [RFC 5246](https://tools.ietf.org/html/rfc5246), currently, Azure Front Door doesn't support client/mutual authentication.
+
+## Diagnostics and logging
+
+### What types of metrics and logs are available with Azure Front Door?
+
+For information on logs and other diagnostic capabilities, see Monitoring metrics and logs for Front Door.
+
+### What is the retention policy on the diagnostics logs?
+
+Diagnostic logs flow to the customer's storage account, and customers can set the retention policy based on their preference. Diagnostic logs can also be sent to an Event Hub or Azure Monitor logs. For more information, see [Azure Front Door Logging](how-to-logs.md).
+
+### How do I get audit logs for Azure Front Door?
+
+Audit logs are available for Azure Front Door. In the portal, select **Activity Log** in the menu page of your Front Door to access the audit log.
+
+### Can I set alerts with Azure Front Door?
+
+Yes, Azure Front Door does support alerts. Alerts are configured on metrics.
+
+## Next steps
+
+Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
+
+ Title: How to add a custom domain to your Azure Front Door Standard/Premium SKU configuration
+description: In this tutorial, you'll learn how to onboard a custom domain to Azure Front Door Standard/Premium SKU.
+
+documentationcenter: ''
++++ Last updated : 02/18/2021+
+#Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
++
+# Create a custom domain on Azure Front Door Standard/Premium SKU (Preview) using the Azure portal
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+When you use Azure Front Door Standard/Premium for application delivery, a custom domain is necessary if you would like your own domain name to be visible in your end-user requests. Having a visible domain name can be convenient for your customers and useful for branding purposes.
+
+After you create an Azure Front Door Standard/Premium profile, the default frontend host will have a subdomain of azurefd.net. This subdomain gets included in the URL when Azure Front Door Standard/Premium delivers content from your backend by default. For example, `https://contoso-frontend.azurefd.net/activeusers.htm`. For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of an Azure Front Door Standard/Premium owned domain name. For example, https://www.contoso.com/photo.png.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+* Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door Standard/Premium](create-front-door-portal.md).
+
+* If you don't already have a custom domain, you must first purchase one with a domain provider. For example, see [Buy a custom domain name](../../app-service/manage-custom-dns-buy-domain.md).
+
+* If you're using Azure to host your [DNS domains](../../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to an Azure DNS. For more information, see [Delegate a domain to Azure DNS](../../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you're using a domain provider to handle your DNS domain, you must manually validate the domain by entering prompted DNS TXT records.
+
+## Add a new custom domain
+
+A custom domain is managed in the Domains section of the portal. A custom domain can be created and validated before being associated with an endpoint. A custom domain and its subdomains can be associated with only a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Front Doors. You can also map custom domains with different subdomains to the same Front Door endpoint.
+
+1. Under Settings for your Azure Front Door profile, select *Domains* and then the **Add a domain** button.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/add-domain-button.png" alt-text="Screenshot of add domain button on domain landing page.":::
+
+1. The **Add a domain** page will appear where you can enter information about the custom domain. You can choose Azure-managed DNS (recommended), or you can choose to use your own DNS provider. If you choose Azure-managed DNS, select an existing DNS zone and then select a custom subdomain or create a new one. If you're using another DNS provider, manually enter the custom domain name. Select **Add** to add your custom domain.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/add-domain-page.png" alt-text="Screenshot of add a domain page.":::
+
+ A new custom domain is created with a validation state of **Submitting**.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/validation-state-submitting.png" alt-text="Screenshot of domain validation state submitting.":::
+
+ Wait until the validation state changes to **Pending**. This operation could take a few mins.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/validation-state-pending.png" alt-text="Screenshot of domain validation state pending.":::
+
+1. Select the **Pending** validation state. A new page will appear with the DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`. If you're using an Azure DNS-based zone, select the **Add** button and a new TXT record with the displayed record value will be created in the Azure DNS zone. If you're using another DNS provider, manually create a new TXT record named `_dnsauth.<your_subdomain>` with the record value as shown on the page. You can optionally confirm that the record has propagated; see the sketch after these steps.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/validate-custom-domain.png" alt-text="Screenshot of validate custom domain page.":::
+
+1. Select the refresh status. Once the domain is validated using the DNS TXT record, the validation status will change to **Verified**. This operation may take a few minutes.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/domain-status-verified.png" alt-text="Screenshot of custom domain verified.":::
+
+1. Close the page to return to the custom domains list landing page. The provisioning state of the custom domain should change to **Provisioned** and the validation state should change to **Approved**.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/provisioned-approved-status.png" alt-text="Screenshot of provisioned and approved status.":::
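+
+If you're using another DNS provider and want to confirm the TXT record has propagated before refreshing the status, the sketch below is one way to check it. It assumes the third-party `dnspython` package and uses a hypothetical subdomain.
+
+```python
+import dns.resolver  # pip install dnspython
+
+# Hypothetical subdomain; replace with the custom domain you added.
+record_name = "_dnsauth.www.contoso.com"
+
+for answer in dns.resolver.resolve(record_name, "TXT"):
+    # Each TXT answer should show the validation token displayed in the portal.
+    print(answer.to_text())
+```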
+
+## Associate the custom domain with your Front Door Endpoint
+
+After you've validated your custom domain, you can then add it to your Azure Front Door Standard/Premium endpoint.
+
+1. Once the custom domain is validated, you can associate it with an existing Azure Front Door Standard/Premium endpoint and route. Select the **Endpoint association** link to open the **Associate endpoint and routes** page. Select the endpoint and routes you want to associate with the domain. Then select **Associate**. Close the page once the associate operation completes.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/associate-endpoint-routes.png" alt-text="Screenshot of associate endpoint and routes page.":::
+
+ The Endpoint association status should change to reflect the endpoint to which the custom domain is currently associated.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/endpoint-association-status.png" alt-text="Screenshot of endpoint association link.":::
+
+1. Select the DNS state link.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/dns-state-link.png" alt-text="Screenshot of DNS state link.":::
+
+1. The **Add or update the CNAME record** page will appear and display the CNAME record information that must be provided before traffic can start flowing. If you're using Azure DNS hosted zones, the CNAME records can be created by selecting the **Add** button on the page. If you're using another DNS provider, you must manually enter the CNAME record name and value as shown on the page.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/add-update-cname-record.png" alt-text="Screenshot of add or update CNAME record.":::
+
+1. Once the CNAME record is created and the custom domain association to the Azure Front Door endpoint completes, traffic starts flowing.
+
+ > [!NOTE]
+ > If HTTPS is enabled, certificate provisioning and propagation may take a few minutes because propagation is being done to all edge locations.
+
+## Verify the custom domain
+
+After you've validated and associated the custom domain, verify that the custom domain is correctly referenced to your endpoint.
++
+Lastly, validate that your application content is getting served using a browser.
+
+## Next steps
+
+To learn how to enable HTTPS for your custom domain, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Enable HTTPS for a custom domain]()
frontdoor How To Add Security Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-add-security-headers.md
+
+ Title: Configure security headers with Azure Front Door Standard/Premium (Preview) Rule Set
+description: This article provides guidance on how to use rule set to configure security headers.
++++ Last updated : 02/18/2021+++
+# Configure security headers with Azure Front Door Standard/Premium (Preview) Rule Set
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+This article shows how to implement security headers to prevent browser-based vulnerabilities like HTTP Strict-Transport-Security (HSTS), X-XSS-Protection, Content-Security-Policy, or X-Frame-Options. Security-based attributes can also be defined with cookies.
+
+The following example shows you how to add a Content-Security-Policy header to all incoming requests that match the path in the Route. Here, we only allow scripts from our trusted site, **https://apiphany.portal.azure-api.net**, to run on our application.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* Before you can configure security headers, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door](create-front-door-portal.md).
+* Review how to [Set up a Rule Set](how-to-configure-rule-set.md) if you haven't used the Rule Set feature before.
+
+## Add a Content-Security-Policy header in Azure portal
+
+1. Go to the Azure Front Door Standard/Premium profile and select **Rule Set** under **Settings.**
+
+1. Select **Add** to add a new rule set. Give the Rule Set a **Name** and then provide a **Name** for the rule. Select **Add an Action** and then select **Response Header**.
+
+1. Set the operator to **Append** to add this header to the response for all of the incoming requests for this route.
+
+1. Add the header name: **Content-Security-Policy** and define the values this header should accept. In this scenario, we choose *"script-src 'self' https://apiphany.portal.azure-api.net"*.
+
+1. Once you've added all of the rules you'd like to your configuration, don't forget to associate the rule set with a route. This step is *required* to allow the rule set to take action.
+
+> [!NOTE]
+> In this scenario, we did not add [match conditions](concept-rule-set-match-conditions.md) to the rule. All incoming requests that match the path defined in the associated route will have this rule applied. If you would like it to only apply to a subset of those requests, be sure to add your specific **match conditions** to this rule.
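+
+After the rule set is associated with a route and the change has propagated, you can confirm that responses carry the new header. The endpoint URL below is a placeholder; use a URL that matches the route associated with your rule set.
+
+```python
+import requests
+
+# Placeholder endpoint; replace with a path covered by the associated route.
+response = requests.get("https://contoso-frontend.z01.azurefd.net/")
+
+# Expected value: script-src 'self' https://apiphany.portal.azure-api.net
+print(response.headers.get("Content-Security-Policy"))
+```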
+
+## Clean up resources
+
+### Deleting a Rule
+
+In the preceding steps, you configured the Content-Security-Policy header with a Rule Set. If you no longer want a rule, you can select the Rule Set name and then select **Delete rule**.
+
+### Deleting a Rule Set
+
+If you want to delete a Rule Set, make sure you disassociate it from all routes before deleting. For detailed guidance on deleting a rule set, refer to [Configure your rule set](how-to-configure-rule-set.md).
+
+## Next steps
+
+To learn how to configure a Web Application Firewall for your Front Door, see [Web Application Firewall and Front Door](../../web-application-firewall/afds/afds-overview.md).
frontdoor How To Cache Purge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-cache-purge.md
+
+ Title: 'Cache purging in Azure Front Door Standard/Premium (Preview)'
+description: This article helps you understand how to purge cache on an Azure Front Door Standard/Premium.
++++++ Last updated : 02/18/2021+++
+# Cache purging in Azure Front Door Standard/Premium (Preview)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+Azure Front Door Standard/Premium caches assets until the asset's time-to-live (TTL) expires. Whenever a client requests an asset with expired TTL, the Azure Front Door environment retrieves a new updated copy of the asset to serve the request and then stores the refreshed cache.
+
+Best practice is to make sure your users always obtain the latest copy of your assets. The way to do that is to version your assets for each update and publish them as new URLs. Azure Front Door Standard/Premium will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached contents from all edge nodes and force them all to retrieve new updated assets. You might want to purge cached contents because you've made new updates to your application, or you want to update assets that contain incorrect information.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+Review [Azure Front Door Caching](concept-caching.md) to understand how caching works.
+
+## Configure cache purge
+
+1. Go to the overview page of the Azure Front Door profile with the assets you want to purge, then select **Purge cache**.
+
+ :::image type="content" source="../media/how-to-cache-purge/front-door-cache-purge-1.png" alt-text="Screenshot of cache purge on overview page.":::
+
+1. Select the endpoint and domain you want to purge from the edge nodes. *(You can select more than one domain.)*
+
+ :::image type="content" source="../media/how-to-cache-purge/front-door-cache-purge-2.png" alt-text="Screenshot of cache purge page.":::
+
+1. To clear all assets, select **Purge all assets for the selected domains**. Otherwise, in **Paths**, enter the path of each asset you want to purge.
+
+These formats are supported in the lists of paths to purge:
+
+* **Single path purge**: Purge individual assets by specifying the full path of the asset (without the protocol and domain), with the file extension, for example, `/pictures/strasbourg.png`.
+* **Root domain purge**: Purge the root of the endpoint with "/*" in the path.
+
+Cache purges on Azure Front Door Standard/Premium are case-insensitive. Additionally, they're query string agnostic, meaning purging a URL will purge all query-string variations of it.
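+
+Besides the portal, a purge can also be triggered programmatically against the management API. The resource path, API version, and request body in the sketch below are assumptions based on the Microsoft.Cdn AFD endpoint purge operation; verify them against the current REST API reference before relying on them. All resource names are placeholders.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential  # pip install azure-identity
+
+# Placeholders; substitute your own values.
+subscription_id = "<subscription-id>"
+resource_group = "<resource-group>"
+profile_name = "<profile-name>"
+endpoint_name = "<endpoint-name>"
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+
+# Assumed resource path and API version for the AFD endpoint purge operation.
+url = (
+    f"https://management.azure.com/subscriptions/{subscription_id}"
+    f"/resourceGroups/{resource_group}/providers/Microsoft.Cdn"
+    f"/profiles/{profile_name}/afdEndpoints/{endpoint_name}/purge"
+    "?api-version=2020-09-01"
+)
+
+body = {"contentPaths": ["/pictures/strasbourg.png"]}  # Or ["/*"] to purge the root domain
+
+response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
+print(response.status_code)  # 202 indicates that the purge request was accepted
+```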
+
+## Next steps
+
+Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
frontdoor How To Compression https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-compression.md
+
+ Title: Improve performance by compressing files in Azure Front Door Standard/Premium (Preview)
+description: Learn how to improve file transfer speed and increase page-load performance by compressing your files in Azure Front Door.
++++ Last updated : 02/18/2021+++
+# Improve performance by compressing files in Azure Front Door Standard/Premium (Preview)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+File compression is an effective method to improve file transfer speed and increase page-load performance. The compression reduces the size of the file before it's sent by the server. File compression can reduce bandwidth costs and provide a better experience for your users.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+There are two ways to enable file compression:
+
+- Enabling compression on your origin server. Azure Front Door passes along the compressed files and delivers them to clients that request them.
+- Enabling compression directly on the Azure Front Door POP servers (*compression on the fly*). In this case, Azure Front Door compresses the files and sends them to the end users.
+
+> [!IMPORTANT]
+> Azure Front Door configuration changes take up to 10 minutes to propagate throughout the network. If you're setting up compression for the first time for your endpoint, consider waiting 1-2 hours before you troubleshoot to ensure the compression settings have propagated to all the POPs.
+
+## Enabling compression
+
+> [!Note]
+> In Azure Front Door, compression is part of **Enable Caching** in Route. Only when you **Enable Caching** can you take advantage of compression in Azure Front Door.
+
+You can enable compression in the following ways:
+* During quick create - When you enable caching, you can enable compression.
+* During custom create - Enable caching and compression when you're adding a route.
+* In Endpoint Manager route.
+* On the Optimization page.
+
+### Enable compression in Endpoint manager
+
+1. From the Azure Front Door Standard/Premium profile page, go to **Endpoint Manager** and select the endpoint for which you want to enable compression.
+
+1. Select **Edit Endpoint**, then select the **route** for which you want to enable compression.
+
+ :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-1.png" alt-text="Screenshot of Endpoint Manager landing page." lightbox="../media/how-to-compression/front-door-compression-endpoint-manager-1-expanded.png":::
+
+1. Ensure **Enable Caching** is checked, then select the checkbox for **Enable compression**.
+
+ :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Enable compression in endpoint manager.":::
+
+1. Select **Update** to save the configuration.
+
+### Enable compression in Optimization
+
+1. From the Azure Front Door Standard/Premium profile page, go to **Optimizations** under Settings. Expand the endpoint to see the list of routes.
+
+1. Select the three dots next to the **route** that has compression *Disabled*. Then select **Configure route**.
+
+ :::image type="content" source="../media/how-to-compression/front-door-compression-optimization-1.png" alt-text="Screen of enable compression on the optimization page." lightbox="../media/how-to-compression/front-door-compression-optimization-1-expanded.png":::
+
+1. Ensure **Enable Caching** is checked, then select the checkbox for **Enable compression**.
+
+ :::image type="content" source="../media/how-to-compression/front-door-compression-endpoint-manager-2.png" alt-text="Screen shot of enabling compression in endpoint manager.":::
+
+1. Click **Update**.
+
+## Modify compression content type
+
+You can modify the default list of MIME types on the Optimizations page.
+
+1. From the Azure Front Door Standard/Premium profile page, go to **Optimizations** under Settings. Then select the **route** that has compression *Enabled*.
+
+1. Select the three dots next to the **route** that has compression *Enabled*. Then select **View Compressed file types**.
+
+ :::image type="content" source="../media/how-to-compression/front-door-compression-edit-content-type.png" alt-text="Screenshot of optimization page." lightbox="../media/how-to-compression/front-door-compression-edit-content-type-expanded.png":::
+
+1. Delete default formats or select **Add** to add new content types.
+
+ :::image type="content" source="../media/how-to-compression/front-door-compression-edit-content-type-2.png" alt-text="Screenshot of customize file compression page.":::
+
+1. Select **Save** to update the compression configuration.
+
+## Disabling compression
+
+You can disable compression in the following ways:
+* Disable compression in Endpoint manager route.
+* Disable compression in Optimization page.
+
+### Disable compression in Endpoint manager
+
+1. From the Azure Front Door Standard/Premium profile page, go to **Endpoint manager** under Settings. Select the endpoint for which you want to disable compression.
+
+1. Select **Edit Endpoint** and then select the **route** for which you want to disable compression. Uncheck the **Enable compression** box.
+
+1. Select **Update** to save the configuration.
+
+### Disable compression in Optimizations
+
+1. From the Azure Front Door Standard/Premium profile page, go to **Optimizations** under Settings. Then select the **route** that has compression *Enabled*.
+
+1. Select the three dots next to the **route** that has compression *Enabled*, then select *Configure route*.
+
+ :::image type="content" source="../media/how-to-compression/front-door-disable-compression-optimization.png" alt-text="Screenshot of disable compression in optimization page.":::
+
+1. Uncheck the **Enable compression** box.
+
+ :::image type="content" source="../media/how-to-compression/front-door-disable-compression-optimization-2.png" alt-text="Screenshot of update route page for disabling compression.":::
+
+1. Select **Update** to save the configuration.
+
+## Compression rules
+
+In Azure Front Door, only eligible files are compressed. To be eligible for compression, a file must meet all of the following rules (a small code sketch of these rules appears at the end of this section):
+* Be of a MIME type that has been configured for compression
+* Be larger than 1 KB
+* Be smaller than 8 MB
+
+These profiles support the following compression encodings:
+* gzip (GNU zip)
+* brotli
+
+If the request supports more than one compression type, brotli compression takes precedence.
+
+When a request for an asset specifies gzip compression and the request results in a cache miss, Azure Front Door does gzip compression of the asset directly on the POP server. Afterward, the compressed file is served from the cache.
+
+If the origin uses Chunked Transfer Encoding (CTE) to send compressed data to the Azure Front Door POP, then response sizes greater than 8 MB aren't supported.
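+
+If it helps to see these rules as logic, here's the small sketch referenced above. The MIME type list is only an illustrative subset; the actual default list is the one shown (and editable) on the Optimizations page.
+
+```powershell
+# Illustrative only: mirrors the eligibility rules described in this section.
+$compressibleTypes = @('text/html', 'text/css', 'application/javascript', 'application/json')
+
+function Test-CompressionEligible {
+    param(
+        [string]$MimeType,
+        [long]$SizeInBytes
+    )
+    ($compressibleTypes -contains $MimeType) -and
+        ($SizeInBytes -gt 1KB) -and
+        ($SizeInBytes -lt 8MB)
+}
+
+Test-CompressionEligible -MimeType 'text/css'  -SizeInBytes 24KB   # True
+Test-CompressionEligible -MimeType 'image/png' -SizeInBytes 2MB    # False: type not in the list
+```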
+
+## Next steps
+
+- Learn how to configure your first [Rule Set](how-to-configure-rule-set.md)
+- Learn more about [Rule Set Match Conditions](concept-rule-set-match-conditions.md)
+- Learn more about [Azure Front Door Rule Set](concept-rule-set.md)
frontdoor How To Configure Endpoint Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-configure-endpoint-manager.md
+
+ Title: Configure Azure Front Door Standard/Premium endpoint with Endpoint Manager
+description: This article shows how to configure an endpoint with Endpoint Manager.
++++ Last updated : 02/18/2021+++
+# Configure an Azure Front Door Standard/Premium (Preview) endpoint with Endpoint Manager
+
+> [!NOTE]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View **[Azure Front Door Docs](../front-door-overview.md)**.
+
+This article shows you how to create an endpoint for an existing Azure Front Door Standard/Premium profile with Endpoint Manager.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+Before you can create an Azure Front Door Standard/Premium endpoint with Endpoint Manager, you must have created at least one Azure Front Door profile. A profile can contain one or more Azure Front Door Standard/Premium endpoints. To organize your Azure Front Door Standard/Premium endpoints by internet domain, web application, or other criteria, you can use multiple profiles.
+
+To create an Azure Front Door profile, see [Create a new Azure Front Door Standard/Premium profile](create-front-door-portal.md).
+
+## Create a new Azure Front Door Standard/Premium Endpoint
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door Standard/Premium profile.
+
+1. Select **Endpoint Manager**. Then select **Add an Endpoint** to create a new Endpoint.
+
+ :::image type="content" source="../media/how-to-configure-endpoint-manager/select-create-endpoint.png" alt-text="Screenshot of add an endpoint through Endpoint Manager.":::
+
+1. On the **Add an endpoint** page, enter and select the following settings.
+
+ :::image type="content" source="../media/how-to-configure-endpoint-manager/create-endpoint-page.png" alt-text="Screenshot of add an endpoint page.":::
+
+ | Settings | Value |
+ | -- | -- |
+    | Name | Enter a unique name for the new Azure Front Door Standard/Premium endpoint. This name is used to access your cached resources at the domain `<endpointname>.z01.azurefd.net`. |
+    | Origin Response timeout (secs) | Enter a timeout value in seconds that Azure Front Door waits before considering the connection to the origin to have timed out. |
+ | Status | Select the checkbox to enable this endpoint. |
+
+## Add Domains, Origin Group, Routes, and Security
+
+1. Select **Edit Endpoint** on the endpoint for which you want to configure the route.
+
+1. On the **Edit Endpoint** page, select **+ Add** under Domains.
+
+ :::image type="content" source="../media/how-to-configure-endpoint-manager/select-add-domain.png" alt-text="Screenshot of select domain on Edit Endpoint page.":::
+
+### Add Domain
+
+1. On the **Add Domain** page, choose to associate a domain *from your Azure Front Door profile* or *add a new domain*. For information about how to create a brand new domain, see [Create a new Azure Front Door Standard/Premium custom domain](how-to-add-custom-domain.md).
+
+ :::image type="content" source="../media/how-to-configure-endpoint-manager/add-domain-page.png" alt-text="Screenshot of Add a domain page.":::
+
+1. Select **Add** to add the domain to the current endpoint. The selected domain should appear within the Domain panel.
+
+ :::image type="content" source="../media/how-to-configure-endpoint-manager/domain-in-domainview.png" alt-text="Screenshot of domains in domain view.":::
+
+### Add Origin Group
+
+1. Select **Add** in the Origin groups view. The **Add an origin group** page appears.
+
+    :::image type="content" source="../media/how-to-configure-endpoint-manager/add-origin-group-view.png" alt-text="Screenshot of add an origin group page.":::
+
+1. For **Name**, enter a unique name for the new origin group.
+
+1. Select **Add an Origin** to add a new origin to the current group.
+
+#### Health Probes
+Front Door sends periodic HTTP/HTTPS probe requests to each of your origins. Probe requests determine the proximity and health of each origin to load balance your end-user requests. Health probe settings for an origin group define how we poll the health status of your application origins. The following settings are available for health probe configuration:
+
+> [!WARNING]
+> Since Front Door has many edge environments globally, health probe volume for your origin can be quite high - ranging from 25 requests every minute to as high as 1200 requests per minute, depending on the health probe frequency configured. With the default probe frequency of 30 seconds, the probe volume on your origin should be about 200 requests per minute.
+
+* **Status**: Specify whether to enable health probing. If you have a single origin in your origin group, you can disable health probes to reduce the load on your application backend. You can also disable health probes when only one origin in the group is in an enabled state.
+
+* **Path**: The URL used for probe requests for all the origins in this origin group. For example, if one of your origins is contoso-westus.azurewebsites.net and the path is set to /probe/test.aspx, then Front Door environments, assuming the protocol is set to HTTP, will send health probe requests to `http://contoso-westus.azurewebsites.net/probe/test.aspx`.
+
+* **Protocol**: Defines whether to send the health probe requests from Front Door to your origin with HTTP or HTTPS protocol.
+
+* **Probe Method**: The HTTP method to be used for sending health probes. Options include GET or HEAD (default).
+
+ > [!NOTE]
+ > For lower load and cost on your origin, Front Door recommends using HEAD requests for health probes.
+
+* **Interval(in seconds)**: Defines the frequency of health probes to your origin, or the intervals in which each of the Front Door environments sends a probe.
+
+ >[!NOTE]
+    >For faster failovers, set the interval to a lower value. The lower the value, the higher the health probe volume your origin receives. For example, if the interval is set to 30 seconds with, say, 100 Front Door POPs globally, each backend will receive about 200 probe requests per minute (see the sketch below).
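+
+Here's that arithmetic as a quick back-of-the-envelope sketch; the POP count is a placeholder, since the real number of probing environments varies.
+
+```powershell
+# Rough estimate of health probe volume per origin (illustrative numbers only).
+$popCount        = 100   # hypothetical number of Front Door environments probing the origin
+$intervalSeconds = 30    # configured probe interval
+$probesPerMinute = $popCount * (60 / $intervalSeconds)
+"$probesPerMinute probe requests per minute"   # 200 for these values
+```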
+
+#### Load balancing
+Load-balancing settings for the origin group define how we evaluate health probes. These settings determine whether the origin is healthy or unhealthy. They also determine how to load-balance traffic between the different origins in the origin group. The following settings are available for load-balancing configuration:
+
+- **Sample size**. Identifies how many samples of health probes we need to consider for origin health evaluation.
+
+- **Successful sample size**. Of the sample size defined above, the number of successful samples needed to call the origin healthy. For example, assume a Front Door health probe interval is 30 seconds, sample size is 5, and successful sample size is 3. Each time we evaluate the health probes for your origin, we look at the last five samples over 150 seconds (5 x 30). At least three successful probes are required to declare the origin healthy.
+
+- **Latency sensitivity (extra latency)**. Defines whether you want Front Door to send the request to origin within the latency measurement sensitivity range or forward the request to the closest backend.
+
+Select **Add** to add the origin group to the current endpoint. The origin group should appear within the Origin group panel.
++
+### Add Route
+
+Select **Add** in the Routes view to open the **Add a route** page. For information about how to associate the domain and origin group, see [Create a new Azure Front Door route](how-to-configure-route.md).
+
+### Add Security
+
+1. Select **Add** in the Security view. The **Add a WAF policy** page appears.
+
+ :::image type="content" source="../media/how-to-configure-endpoint-manager/add-waf-policy-page.png" alt-text="Screenshot of add a WAF policy page.":::
+
+1. **WAF Policy**: Select the WAF policy you want to apply to the selected domain within this endpoint.
+
+ Select **Create New** to create a brand new WAF policy.
+
+ :::image type="content" source="../media/how-to-configure-endpoint-manager/create-new-waf-policy.png" alt-text="Screenshot of create a new WAF policy.":::
+
+    **Name**: Enter a unique name for the new WAF policy. You can edit this policy with more configuration later from the Web Application Firewall page.
+
+ **Domains**: select the domain to apply the WAF policy.
+
+1. Select the **Add** button. The WAF policy should appear within the Security panel.
+
+ :::image type="content" source="../media/how-to-configure-endpoint-manager/waf-in-security-view.png" alt-text="Screenshot of WAF policy in security view.":::
+
+## Clean up resources
+
+To delete an endpoint when it's no longer needed, select **Delete Endpoint** at the end of the endpoint row.
++
+## Next steps
+
+To learn about custom domains, continue to [Adding a custom domain](how-to-add-custom-domain.md).
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
+
+ Title: Configure HTTPS for your custom domain in an Azure Front Door Standard/Premium SKU configuration
+description: In this article, you'll learn how to onboard a custom domain to Azure Front Door Standard/Premium SKU.
+++++ Last updated : 02/18/2021+
+# Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
++
+# Configure HTTPS on a Front Door Standard/Premium SKU (Preview) custom domain using the Azure portal
+
+> [!NOTE]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+Azure Front Door Standard/Premium enables secure TLS delivery to your applications by default when a custom domain is added. By using the HTTPS protocol on your custom domain, you ensure your sensitive data gets delivered securely with TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a website via HTTPS, it validates the website's security certificate and verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
+
+Azure Front Door Standard/Premium supports both Azure managed certificates and customer-managed certificates. By default, Azure Front Door automatically enables HTTPS on all your custom domains using Azure managed certificates. No additional steps are required to get an Azure managed certificate. A certificate is created during the domain validation process. You can also use your own certificate by integrating Azure Front Door Standard/Premium with your Key Vault.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* Before you can configure HTTPS for your custom domain, you must first create an Azure Front Door Standard/Premium profile. For more information, see [Quickstart: Create an Azure Front Door Standard/Premium profile](create-front-door-portal.md).
+
+* If you don't already have a custom domain, you must first purchase one with a domain provider. For example, see [Buy a custom domain name](../../app-service/manage-custom-dns-buy-domain.md).
+
+* If you're using Azure to host your [DNS domains](../../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to an Azure DNS. For more information, see [Delegate a domain to Azure DNS](../../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you're using a domain provider to handle your DNS domain, you must manually validate the domain by entering prompted DNS TXT records.
+
+## Azure managed certificates
+
+1. Under Settings for your Azure Front Door Standard/Premium profile, select **Domains** and then select **+ Add** to add a new domain.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-new-custom-domain.png" alt-text="Screenshot of domain configuration landing page.":::
+
+1. On the **Add a domain** page, for *DNS management* select the **Azure managed DNS** option.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-domain-azure-managed.png" alt-text="Screen shot of add a domain page with Azure managed DNS selected.":::
+
+1. Validate and associate the custom domain to an endpoint by following the steps to enable a [custom domain](how-to-add-custom-domain.md).
+
+1. Once the custom domain is successfully associated with an endpoint, an Azure managed certificate gets deployed to Front Door. This process may take a few minutes to complete.
+
+## Using your own certificate
+
+You can also choose to use your own TLS certificate. This certificate must be imported into an Azure Key Vault before you can use it with Azure Front Door Standard/Premium. See how to [import a certificate](../../key-vault/certificates/tutorial-import-certificate.md) into Azure Key Vault.
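+
+If you script that step, a hedged sketch using the Az.KeyVault module looks like the following. The vault name, certificate name, and file path are placeholders.
+
+```powershell
+# Import an existing PFX into Key Vault as a certificate object (not a secret).
+$pfxPassword = Read-Host -Prompt 'PFX password' -AsSecureString
+Import-AzKeyVaultCertificate -VaultName '<your-key-vault-name>' `
+    -Name 'contoso-com-tls' `
+    -FilePath 'C:\certs\contoso-com.pfx' `
+    -Password $pfxPassword
+```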
+
+#### Prepare your Azure Key Vault account and certificate
+
+1. You must have a running Azure Key Vault account under the same subscription as the Azure Front Door Standard/Premium profile on which you want to enable custom HTTPS. Create an Azure Key Vault account if you don't have one.
+
+ > [!WARNING]
+ > Azure Front Door currently only supports Key Vault accounts in the same subscription as the Front Door configuration. Choosing a Key Vault under a different subscription than your Azure Front Door Standard/Premium will result in a failure.
+
+1. If you already have a certificate, you can upload it directly to your Azure Key Vault account. Otherwise, create a new certificate directly through Azure Key Vault from one of the partner Certificate Authorities that Azure Key Vault integrates with. Upload your certificate as a **certificate** object, rather than a **secret**.
+
+ > [!NOTE]
+ > For your own TLS/SSL certificate, Front Door doesn't support certificates with EC cryptography algorithms.
+
+#### Register Azure Front Door
+
+Register the service principal for Azure Front Door as an app in your Azure Active Directory via PowerShell.
+
+> [!NOTE]
+> This action requires Global Administrator permissions, and needs to be performed only **once** per tenant.
+
+1. If needed, install [Azure PowerShell](/powershell/azure/install-az-ps) in PowerShell on your local machine.
+
+1. In PowerShell, run the following command:
+
+    `New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8"`
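+
+    If you want this step to be repeatable in a script, a hedged variant that creates the service principal only when it's missing (same application ID as above):
+
+    ```powershell
+    # Create the Front Door service principal only if it doesn't already exist in the tenant.
+    $afdAppId = '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8'
+    if (-not (Get-AzADServicePrincipal -ApplicationId $afdAppId)) {
+        New-AzADServicePrincipal -ApplicationId $afdAppId
+    }
+    ```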
+
+#### Grant Azure Front Door access to your key vault
+
+Grant Azure Front Door permission to access the certificates in your Azure Key Vault account.
+
+1. In your key vault account, under SETTINGS, select **Access policies**. Then select **Add new** to create a new policy.
+
+1. In **Select principal**, search for **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8**, and choose **Microsoft.AzureFrontDoor-Cdn**. Then choose **Select**.
+
+1. In **Secret permissions**, select **Get** to allow Front Door to retrieve the certificate.
+
+1. In **Certificate permissions**, select **Get** to allow Front Door to retrieve the certificate.
+
+1. Select **OK**.
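+
+If you prefer to grant the same access policy from PowerShell, a hedged sketch follows; the vault name is a placeholder.
+
+```powershell
+# Grant the Azure Front Door service principal Get access to certificates and secrets.
+Set-AzKeyVaultAccessPolicy -VaultName '<your-key-vault-name>' `
+    -ServicePrincipalName '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8' `
+    -PermissionsToSecrets get `
+    -PermissionsToCertificates get
+```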
+
+#### Select the certificate for Azure Front Door to deploy
+
+1. Return to your Azure Front Door Standard/Premium in the portal.
+
+1. Navigate to **Secrets** under *Settings* and select **Add certificate**.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-certificate.png" alt-text="Screenshot of Azure Front Door secret landing page.":::
+
+1. On the **Add certificate** page, select the checkbox for the certificate you want to add to Azure Front Door Standard/Premium. Leave the version selection as "Latest" and select **Add**.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-certificate-page.png" alt-text="Screenshot of add certificate page.":::
+
+1. Once the certificate gets provisioned successfully, you can use it when you add a new custom domain.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/successful-certificate-provisioned.png" alt-text="Screenshot of certificate successfully added to secrets.":::
+
+1. Navigate to **Domains** under *Settings* and select **+ Add** to add a new custom domain. On the **Add a domain** page, choose "Bring Your Own Certificate (BYOC)" for *HTTPS*. For *Secret*, select the certificate you want to use from the drop-down.
+
+ > [!NOTE]
+    > The selected certificate must have a common name (CN) that matches the custom domain being added.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-custom-domain-https.png" alt-text="Screenshot of add a custom domain page with HTTPS.":::
+
+1. Follow the on-screen steps to validate the certificate. Then associate the newly created custom domain to an endpoint as outlined in the [creating a custom domain](how-to-add-custom-domain.md) guide.
+
+#### Change from Azure managed to Bring Your Own Certificate (BYOC)
+
+1. You can change an existing Azure managed certificate to a user-managed certificate by selecting the certificate state to open the **Certificate details** page.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/domain-certificate.png" alt-text="Screenshot of certificate state on domains landing page." lightbox="../media/how-to-configure-https-custom-domain/domain-certificate-expanded.png":::
+
+1. On the **Certificate details** page, you can change from the "Azure managed" option to "Bring Your Own Certificate (BYOC)". Then follow the same steps as earlier to choose a certificate. Select **Update** to change the certificate associated with the domain.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/certificate-details-page.png" alt-text="Screenshot of certificate details page.":::
+
+## Next steps
+
+Learn about [caching with Azure Front Door Standard/Premium](concept-caching.md).
frontdoor How To Configure Route https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-configure-route.md
+
+ Title: Configure Azure Front Door Route
+description: This article shows how to configure a Route between your domains and origin groups.
++++ Last updated : 02/18/2021+++
+# Configure an Azure Front Door Standard/Premium Route
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+This article explains each of the settings used in creating an Azure Front Door (AFD) Route for an existing endpoint. After you've added a custom domain and origin to your existing Azure Front Door endpoint, you need to configure a route to define the association between your domains and origins so traffic can be routed between them.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+Before you can configure an Azure Front Door Route, you must have created at least one origin group and one custom domain within the current endpoint.
+
+To set up an origin group, see [Create a new Azure Front Door Standard/Premium origin group](how-to-create-origin.md).
+
+## Create a new Azure Front Door Standard/Premium Route
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door Standard/Premium profile.
+
+1. Select **Endpoint Manager** under **Settings**.
+
+ :::image type="content" source="../media/how-to-configure-route/select-endpoint-manager.png" alt-text="Screenshot of Front Door Endpoint Manager settings." lightbox="../media/how-to-configure-route/select-endpoint-manager-expanded.png":::
+
+1. Then select **Edit Endpoint** for the endpoint you want to configure the Route.
+
+ :::image type="content" source="../media/how-to-configure-route/select-edit-endpoint.png" alt-text="Screenshot of selecting edit endpoint.":::
+
+1. The **Edit Endpoint** page appears. Select **+ Add** for Routes.
+
+ :::image type="content" source="../media/how-to-configure-route/select-add-route.png" alt-text="Screenshot of add a route on edit endpoint page.":::
+
+1. On the **Add Route** page, enter or select the following information.
+
+ :::image type="content" source="../media/how-to-configure-route/add-route-page.png" alt-text="Screenshot of add a route page." lightbox="../media/how-to-configure-route/add-route-page-expanded.png":::
+
+ | Setting | Value |
+ | | |
+ | Name | Enter a unique name for the new Route. |
+    | Domain| Select one or more domains that have been validated and aren't associated with another Route. |
+    | Patterns to Match | Configure all URL path patterns that this route will accept. For example, you can set this to `/images/*` to accept all requests on the URL `www.contoso.com/images/*`. AFD tries to match the request path exactly first; if there's no exact match, it looks for a wildcard path that matches. If no routing rule has a matching path, the request is rejected with a 400: Bad Request HTTP error response. (An illustrative sketch of this matching order follows these steps.) |
+    | Accepted protocols | Specify the protocols you want Azure Front Door to accept when the client is making the request. |
+    | Redirect | Specify whether HTTPS is enforced for incoming HTTP requests. |
+    | Origin group | Select the origin group that requests should be forwarded to when the back-to-origin request occurs. |
+    | Origin Path | Enter the path to the resources that you want to cache. To allow caching of any resource at the domain, leave this setting blank. |
+    | Forwarding protocol | Select the protocol used for forwarding requests. |
+ | Caching | Select this option to enable caching of static content with Azure Front Door. |
+ | Rule | Select Rule Sets that will be applied to this Route. For more information about how to configure Rules, see [Configure a Rule Set for Azure Front Door](how-to-configure-rule-set.md) |
+
+1. Select **Add** to create the new Route. The Route will appear in the list of Routes for the endpoint.
+
+ :::image type="content" source="../media/how-to-configure-route/route-list-page.png" alt-text="Screenshot of routes list.":::
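+
+Here's the illustrative sketch of the path-matching order referenced in the **Patterns to Match** row. It is not Front Door's implementation, only a model of the documented behavior: exact match first, then the longest matching wildcard prefix, otherwise a 400 response.
+
+```powershell
+# Illustrative only: exact path match wins, then the longest matching wildcard prefix.
+$routePatterns = @{
+    '/images/*'   = 'images-route'
+    '/index.html' = 'home-route'
+    '/*'          = 'default-route'
+}
+
+function Select-RouteForPath {
+    param([string]$Path)
+
+    if ($routePatterns.ContainsKey($Path)) { return $routePatterns[$Path] }   # exact match
+
+    $wildcard = $routePatterns.Keys |
+        Where-Object { $_.EndsWith('*') -and $Path.StartsWith($_.TrimEnd('*')) } |
+        Sort-Object Length -Descending |
+        Select-Object -First 1
+
+    if ($wildcard) { return $routePatterns[$wildcard] }
+    return '400: Bad Request'   # no routing rule matched the request path
+}
+
+Select-RouteForPath '/images/logo.png'   # images-route
+Select-RouteForPath '/contact'           # default-route
+```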
+
+## Clean up resources
+
+To delete a route when you no longer need it, select the Route and then select **Delete**.
++
+## Next steps
+To learn about custom domains, continue to the tutorial for adding a custom domain to your Azure Front Door endpoint.
+
+> [!div class="nextstepaction"]
+> [Add a custom domain]()
frontdoor How To Configure Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-configure-rule-set.md
+
+ Title: 'Azure Front Door: Configure Front Door Rule Set'
+description: This article provides guidance on how to configure a Rule Set.
++++ Last updated : 02/18/2021+++
+# Configure a Rule Set
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+This tutorial shows how to create a Rule Set and your first set of rules in the Azure portal.
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> - Configure Rule Set using the portal.
+> - Delete Rule Set from your AFD profile using the portal
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+* Before you can complete the steps in this tutorial, you must first create an Azure Front Door Standard/Premium. For more information, see [Quickstart: Create an Azure Front Door Standard/Premium profile](create-front-door-portal.md).
+
+## Configure Rule Set in Azure portal
+
+1. Within your Front Door profile, select **Rule Set** located under **Settings**. Select **Add** and give it a rule set name.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-create-rule-set-1.png" alt-text="Screenshot of rule set landing page.":::
+
+1. Select **Add Rule** to create your first rule. Give it a rule name. Then, select **Add condition** or **Add action** to define your rule. You can add up to 10 conditions and 5 actions for one rule. In this example, we use a server variable to add a response header *Geo-country* for requests that include *contoso* in the URL.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-create-rule-set.png" alt-text="Screenshot of rule set configuration page.":::
+
+ > [!NOTE]
+ > * To delete a condition or action from a rule, use the trash can on the right-hand side of the specific condition or action.
+ > * To create a rule that applies to all incoming traffic, do not specify any conditions.
+    > * To stop evaluating remaining rules if a specific rule is met, check **Stop evaluating remaining rule**. If this option is checked, all remaining rules in the Rule Set won't be executed, regardless of whether their matching conditions are met.
+
+1. You can determine the priority of the rules within your Rule Set by using the arrow buttons to move the rules higher or lower in priority. The list is in ascending order, so the most important rule is listed first.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-rule-set-change-orders.png" alt-text="Screenshot of rule set priority." lightbox="../media/how-to-configure-rule-set/front-door-rule-set-change-orders-expanded.png":::
+
+1. Once you've created one or more rules select **Save** to complete the creation of your Rule Set.
+
+1. Now associate the Rule Set to a Route so it can take effect. You can associate the Rule Set through the Rule Set page, or you can go to Endpoint Manager to create the association.
+
+ **Rule Set page**:
+
+ 1. Select the Rule Set to be associated.
+
+ 1. Select the *Unassociated* link.
+
+
+ 1. Then in the **Associate a route** blade, select the endpoint and route you want to associate with the Rule Set.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set.png" alt-text="Screenshot of create a route page.":::
+
+    1. Select *Next* to change the rule set order if there are multiple rule sets under the selected route. Rule sets are executed from top to bottom. You can change the order by selecting a rule set and moving it up or down. Then select *Associate*.
+
+ > [!Note]
+ > You can only associate one rule set with a single route on this page. To associate a Rule Set with multiple routes, please use Endpoint Manager.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-2.png" alt-text="Screenshot of rule set orders.":::
+
+ 1. The rule set is now associated with a route. You can look at the response header and see the Geo-country is added.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-3.png" alt-text="Screenshot of rule associated with a route.":::
+
+ **Endpoint Manager**:
+
+ 1. Go to Endpoint manager, select the endpoint you want to associate with the Rule Set.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-1.png" alt-text="Screenshot of selecting endpoint in Endpoint Manager." lightbox="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-1-expanded.png":::
+
+ 1. Click *Edit endpoint*
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-2.png" alt-text="Screenshot of selecting edit endpoint in Endpoint Manager." lightbox="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-2-expanded.png":::
+
+ 1. Click on the Route.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-3.png" alt-text="Screenshot of selecting a route.":::
+
+    1. In the *Update route* blade, in *Rules*, select the Rule Sets you want to associate with the route from the dropdown. Then you can change the order by moving rule sets up or down.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-associate-rule-set-endpoint-manager-4.png" alt-text="Screenshot of update a route page.":::
+
+ 1. Then select *Update* or *Add* to finish the association.
+
+## Delete a Rule Set from your Azure Front Door profile
+
+In the preceding steps, you configured and associated a Rule Set to your Route. If you no longer want the Rule Set associated with your Front Door, you can remove the Rule Set by completing the following steps:
+
+1. Go to the **Rule Set page** under **Settings** to disassociate the Rule Set from all associated routes.
+
+1. Expand the Route, select the three dots, and then select *Edit the route*.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-disassociate-rule-set-1.png" alt-text="Screenshot of route expanded in rule set.":::
+
+1. Go to the Rules section on the Route page, select the rule set, and then select the *Delete* button.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-disassociate-rule-set-2.png" alt-text="Screenshot of update route page to delete a rule set." lightbox="../media/how-to-configure-rule-set/front-door-disassociate-rule-set-2-expanded.png":::
+
+1. Select *Update* to disassociate the Rule Set from the route.
+
+1. Repeat steps 2-4 to disassociate other routes that are associated with this rule set until the Routes status shows *Unassociated*.
+
+1. For a Rule Set that is *Unassociated*, you can delete it by selecting the three dots on the right and then selecting *Delete*.
+
+ :::image type="content" source="../media/how-to-configure-rule-set/front-door-disassociate-rule-set-3.png" alt-text="Screenshot of how to delete a rule set.":::
+
+1. The rule set is now deleted.
+
+## Next steps
+
+In this tutorial, you learned how to:
+
+* Create a Rule set
+* Associate a rule set to your AFD route.
+* Delete a rule set from your AFD profile
+
+To learn how to add security headers with Rule Set, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Security headers with Rules Set]()
frontdoor How To Create Origin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-create-origin.md
+
+ Title: Set up an Azure Front Door Standard/Premium (Preview) Origin
+description: This article shows how to configure an origin with Endpoint Manager.
++++ Last updated : 02/18/2021+++
+# Set up an Azure Front Door Standard/Premium (Preview) Origin
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+This article will show you how to create an Azure Front Door Standard/Premium origin in an existing origin group.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+Before you can create an Azure Front Door Standard/Premium origin, you must have created at least one origin group.
+
+## Create a new Azure Front Door Standard/Premium Origin
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door Standard/Premium profile.
+
+1. Select **Origin Group**. Then select **+ Add** to create a new origin group.
+
+ :::image type="content" source="../media/how-to-create-origin/select-add-origin.png" alt-text="Screenshot of origin group landing page.":::
+
+1. On the **Add an origin group** page, enter a unique **Name** for the new origin group.
+
+1. Then select **+ Add an Origin** to add a new origin to this origin group.
+
+ :::image type="content" source="../media/how-to-create-origin/add-origin-view.png" alt-text="Screenshot of add an origin page.":::
+
+ | Setting | Value |
+ | | |
+ | Name | Enter a unique name for the new Azure Front Door origin. |
+ | Origin Type | The type of resource you want to add. Azure Front Door Standard/Premium supports autodiscovery of your app origin from app service, cloud service, or storage. If you want a different resource in Azure or a non-Azure backend, select **Custom host**. |
+ | Host Name | If you didn't select **Custom host** for origin host type, select your backend by choosing the origin host name in the dropdown. |
+ | Origin Host Header | Enter the host header value being sent to the backend for each request. For more information, see [Origin host header](concept-origin.md#hostheader). |
+ | HTTP Port | Enter the value for the port that the origin supports for HTTP protocol. |
+ | HTTPS Port | Enter the value for the port that the origin supports for HTTPS protocol. |
+    | Priority | Assign priorities to your different origins when you want to use a primary origin for all traffic, and provide backups in case the primary or another backup origin is unavailable. For more information, see [Priority](concept-origin.md#priority). |
+ | Weight | Assign weights to your different origins to distribute traffic across a set of origins, either evenly or according to weight coefficients. For more information, see [Weights](concept-origin.md#weighted). |
+ | Status | Select this option to enable origin. |
+ | Rule | Select Rule Sets that will be applied to this Route. For more information about how to configure Rules, see [Configure a Rule Set for Azure Front Door](how-to-configure-rule-set.md) |
+
+ > [!IMPORTANT]
+    > During configuration, the API doesn't validate whether the origin is accessible from Front Door environments. Make sure that Front Door can reach your origin.
+
+1. Select **Add** to create the new origin. The created origin should appear in the origin list within the group.
+
+ :::image type="content" source="../media/how-to-create-origin/add-origin-view.png" alt-text="Screenshot of add an origin page.":::
+
+1. Select **Add** to add the origin group to current endpoint. The origin group should appear within the Origin group panel.
+
+## Clean up resources
+To delete an origin group when you no longer need it, select the **...** and then select **Delete** from the drop-down.
++
+To delete an origin when you no longer need it, select the **...** and then select **Delete** from the drop-down.
++
+## Next steps
+
+To learn about custom domains, see [adding a custom domain](how-to-add-custom-domain.md) to your Azure Front Door Standard/Premium endpoint.
frontdoor How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-enable-private-link.md
+
+ Title: 'Connect Azure Front Door Premium to your origin with Private Link'
+description: Learn how to connect your Azure Front Door Premium to your origin with Private Link service by using the Azure portal.
++++ Last updated : 02/18/2021+
+# Customer intent: As someone with a basic network background who's new to Azure, I want to configure Front Door to connect to my origin via private link service by using Azure portal
++
+# Connect Azure Front Door Premium to your origin with Private Link
+
+This article will guide you through how to configure Azure Front Door Premium SKU to connect to your applications hosted in a virtual network using the Azure Private Link service.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Create a [Private Link](../../private-link/create-private-link-service-portal.md) service for your origin web servers.
+
+## Sign in to the Azure portal
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Enable private endpoint in Azure Front Door service
+
+In this section, you'll map the Azure Private Link service to a private endpoint created in the Azure Front Door Premium SKU's private network.
+
+1. Within your Azure Front Door Premium profile, under *Settings*, select **Origin groups**.
+
+1. Select the origin group that contains the origin you want to enable Private Link for.
+
+1. Select **+ Add an origin** to add a new origin or select a previously created origin from the list. Then select the checkbox to **Enable private link service**.
+
+ :::image type="content" source="../media/how-to-enable-private-link/front-door-private-endpoint-private-link.png" alt-text="Screenshot of enabling private link in add an origin page.":::
+
+1. For **Select an Azure resource**, select **In my directory**. Select or enter the following settings to configure the resource you want Azure Front Door Premium to connect with privately.
+
+ | Setting | Value |
+ | - | -- |
+ | Region | Select the region that is the same or closest to your origin. |
+ | Resource type | Select **Microsoft.Network/privateLinkServices**. |
+ | Resource | Select **myPrivateLinkService**. |
+ | Target sub resource | Leave this field empty. |
+ | Request message | Customize message or choose the default message. |
+
+## Next Steps
+
+Learn about [Azure Front Door Premium Private Link](concept-private-link.md).
frontdoor How To Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-logs.md
+
+ Title: 'Azure Front Door Standard/Premium (Preview) Logging'
+description: This article explains how logging works in Azure Front Door Standard/Premium.
++++ Last updated : 02/18/2020+++
+# Azure Front Door Standard/Premium (Preview) Logging
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+Azure Front Door provides different logs to help you track, monitor, and debug your Front Door.
+
+* Access logs have detailed information about every request that AFD receives and help you analyze and monitor access patterns and debug issues.
+* Activity logs provide visibility into the operations done on Azure resources.
+* Health probe logs provide logging for every failed probe to your origin.
+* Web Application Firewall (WAF) logs provide detailed information about requests that get logged through either detection or prevention mode of an Azure Front Door endpoint. A custom domain that gets configured with WAF can also be viewed through these logs.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Access Logs, health probe logs and WAF logs aren't enabled by default. Use the steps below to enable logging. Activity log entries are collected by default, and you can view them in the Azure portal. Logs can have delays up to a few minutes.
+
+You have three options for storing your logs:
+
+* **Storage account:** Storage accounts are best used for scenarios when logs are stored for a longer duration and reviewed when needed.
+* **Event hubs:** Event hubs are a great option for integrating with other security information and event management (SIEM) tools or external data stores. For example: Splunk/DataDog/Sumo.
+* **Azure Log Analytics:** Azure Log Analytics in Azure Monitor is best used for general real-time monitoring and analysis of Azure Front Door performance.
+
+## Configure Logs
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for Azure Front Door Standard/Premium and select the Azure Front Door profile.
+
+1. In the profile, go to **Monitoring** and select **Diagnostic settings**. Then select **Add diagnostic setting**.
+
+ :::image type="content" source="../media/how-to-logging/front-door-logging-1.png" alt-text="Screenshot of diagnostic settings landing page.":::
+
+1. Under **Diagnostic settings**, enter a name for **Diagnostic settings name**.
+
+1. Select the **logs** to collect from **FrontDoorAccessLog**, **FrontDoorHealthProbeLog**, and **FrontDoorWebApplicationFirewallLog**.
+
+1. Select the **Destination details**. Destination options are:
+
+ * **Send to Log Analytics**
+ * Select the *Subscription* and *Log Analytics workspace*.
+ * **Archive to a storage account**
+        * Select the *Subscription* and the *Storage account*, and set **Retention (days)**.
+ * **Stream to an event hub**
+ * Select the *Subscription, Event hub namespace, Event hub name (optional)*, and *Event hub policy name*.
+
+ :::image type="content" source="../media/how-to-logging/front-door-logging-2.png" alt-text="Screenshot of diagnostic settings page.":::
+
+1. Select **Save**.
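+
+If you prefer to script the same configuration, a hedged sketch with the Az.Monitor module is shown below. The resource IDs and setting name are placeholders, and the exact parameter set can vary by module version, so verify it against your installed Az.Monitor.
+
+```powershell
+# Route the three Front Door log categories to a Log Analytics workspace.
+$afdProfileId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Cdn/profiles/<profile-name>'
+$workspaceId  = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>'
+
+Set-AzDiagnosticSetting -Name 'afd-diagnostics' `
+    -ResourceId $afdProfileId `
+    -WorkspaceId $workspaceId `
+    -Category FrontDoorAccessLog, FrontDoorHealthProbeLog, FrontDoorWebApplicationFirewallLog `
+    -Enabled $true
+```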
+
+## Access Log
+
+Azure Front Door currently provides individual request logging. Each entry has the following schema and is logged in JSON format, as shown below.
+
+| Property | Description |
+|-|-|
+| TrackingReference | The unique reference string that identifies a request served by AFD, also sent as X-Azure-Ref header to the client. Required for searching details in the access logs for a specific request. |
+| Time | The date and time when the AFD edge delivered requested contents to client (in UTC). |
+| HttpMethod | HTTP method used by the request: DELETE, GET, HEAD, OPTIONS, PATCH, POST, or PUT. |
+| HttpVersion | The HTTP version that the viewer specified in the request. |
+| RequestUri | URI of the received request. This field is a full scheme, port, domain, path, and query string |
+| HostName | The host name in the request from the client. If you enable custom domains and have a wildcard domain (*.contoso.com), the hostname is a.contoso.com. If you use the Azure Front Door domain (contoso.azurefd.net), the hostname is contoso.azurefd.net. |
+| RequestBytes | The size of the HTTP request message in bytes, including the request headers and the request body. The number of bytes of data that the viewer included in the request, including headers. |
+| ResponseBytes | Bytes sent by the backend server as the response. |
+| UserAgent | The browser type that the client used. |
+| ClientIp | The IP address of the client that made the original request. If there was an X-Forwarded-For header in the request, then the Client IP is picked from the same. |
+| SocketIp | The IP address of the direct connection to AFD edge. If the client used an HTTP proxy or a load balancer to send the request, the value of SocketIp is the IP address of the proxy or load balancer. |
+| Latency | The length of time from the time AFD edge server receives a client's request to the time that AFD sends the last byte of response to client, in milliseconds. This field doesn't take into account network latency and TCP buffering. |
+| RequestProtocol | The protocol that the client specified in the request: HTTP, HTTPS. |
+| SecurityProtocol | The TLS/SSL protocol version used by the request or null if no encryption. Possible values include: SSLv3, TLSv1, TLSv1.1, TLSv1.2 |
+| SecurityCipher | When the value for Request Protocol is HTTPS, this field indicates the TLS/SSL cipher negotiated by the client and AFD for encryption. |
+| Endpoint | The domain name of AFD endpoint, for example, contoso.z01.azurefd.net |
+| HttpStatusCode | The HTTP status code returned from AFD. |
+| Pop | The edge pop, which responded to the user request. |
+| Cache Status | Provides the status code of how the request gets handled by the CDN service when it comes to caching. Possible values are: <br> **HIT**: The HTTP request was served from the AFD edge POP cache. <br> **MISS**: The HTTP request was served from origin. <br/> **PARTIAL_HIT**: Some of the bytes from a request got served from the AFD edge POP cache while some of the bytes got served from origin for object chunking scenarios. <br> **CACHE_NOCONFIG**: Forwarding requests without caching settings, including the bypass scenario. <br/> **PRIVATE_NOSTORE**: No cache configured in caching settings by customers. <br> **REMOTE_HIT**: The request was served by the parent node cache. <br/> **N/A**: The request was denied by Signed URL and Rules Set. |
+| MatchedRulesSetName | The names of the rules that were processed. |
+| RouteName | The name of the route that the request matched. |
+| ClientPort | The IP port of the client that made the request. |
+| Referrer | The URL of the site that originated the request. |
+| TimetoFirstByte | The length of time in milliseconds from when AFD receives the request to when the first byte gets sent to the client, as measured on Azure Front Door. This property doesn't measure the client data. |
+| ErrorInfo | This field provides detailed info of the error token for each response. <br> **NoError**: Indicates no error was found. <br> **CertificateError**: Generic SSL certificate error. <br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. <br> **ClientDisconnected**: Request failure because of client network connection. <br> **ClientGeoBlocked**: The client was blocked due to the geographical location of the IP. <br> **UnspecifiedClientError**: Generic client error. <br> **InvalidRequest**: Invalid request. It might occur because of a malformed header, body, or URL. <br> **DNSFailure**: DNS failure. <br> **DNSTimeout**: The DNS query to resolve the backend timed out. <br> **DNSNameNotResolved**: The server name or address couldn't be resolved. <br> **OriginConnectionAborted**: The connection with the origin was disconnected abnormally. <br> **OriginConnectionError**: Generic origin connection error. <br> **OriginConnectionRefused**: The connection with the origin wasn't established. <br> **OriginError**: Generic origin error. <br> **OriginInvalidRequest**: An invalid request was sent to the origin. <br> **ResponseHeaderTooBig**: The origin returned too large a response header. <br> **OriginInvalidResponse**: The origin returned an invalid or unrecognized response. <br> **OriginTimeout**: The timeout period for the origin request expired. <br> **RestrictedIP**: The request was blocked because of a restricted IP. <br> **SSLHandshakeError**: Unable to establish a connection with the origin because of an SSL handshake failure. <br> **SSLInvalidRootCA**: The RootCA was invalid. <br> **SSLInvalidCipher**: The cipher for which the HTTPS connection was established was invalid. <br> **UnspecifiedError**: An error occurred that didn't fit in any of the errors in the table. |
+| OriginURL | The full URL of the origin where requests are being sent. Composed of the scheme, host header, port, path, and query string. <br> **URL rewrite**: If there's a URL rewrite rule in the Rule Set, the path refers to the rewritten path. <br> **Cache hit on edge POP**: If it's a cache hit on the edge POP, the origin is N/A. <br> **Large request**: If the requested content is large with multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see Object Chunking. |
+| OriginIP | The origin IP that served the request. <br> **Cache hit on edge POP**: If it's a cache hit on the edge POP, the origin is N/A. <br> **Large request**: If the requested content is large with multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see Object Chunking. |
+| OriginName| The full DNS name (hostname in the origin URL) of the origin. <br> **Cache hit on edge POP**: If it's a cache hit on the edge POP, the origin is N/A. <br> **Large request**: If the requested content is large with multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see Object Chunking. |
+
+## Health Probe Log
+
+Health probe logs provide logging for every failed probe to help you diagnose your origin. The logs provide information that you can use to bring the origin back into service. Some scenarios where this log can be useful are:
+
+* You noticed Azure Front Door traffic was sent to only some of the origins. For example, only three out of four origins are receiving traffic. You want to know if the origins are receiving probes and, if not, the reason for the failure.
+
+* You noticed the origin health % is lower than expected and want to know which origin failed and the reason for the failure.
+
+### Health Probe Log Properties
+
+Each health probe log has the following schema.
+
+| Property | Description |
+| | |
+| HealthProbeId | A unique ID to identify the request. |
+| Time | The time the probe completed. |
+| HttpMethod | HTTP method used by the health probe request. Values include GET and HEAD, based on health probe configurations. |
+| Result | Status of the health probe to the origin. The value is either success or other error text. |
+| HttpStatusCode | The HTTP status code returned from the origin. |
+| ProbeURL (target) | The full URL of the origin where requests are being sent. Composed of the scheme, host header, path, and query string. |
+| OriginName | The origin where requests are being sent. This field helps locate origins of interest if the origin is configured with an FQDN. |
+| POP | The edge POP, which sent out the probe request. |
+| Origin IP | Target origin IP. This field is useful in locating origins of interest if you configure the origin using an FQDN. |
+| TotalLatency | The time from when the AFD edge sends the request to the origin until the origin sends the last response byte back to the AFD edge. |
+| ConnectionLatency | Time spent setting up the TCP connection to send the HTTP probe request to the origin. |
+| DNSResolutionLatency | Time spent on DNS resolution if the origin is configured as an FQDN instead of an IP address. N/A if the origin is configured as an IP address. |
+
+### Health Probe Log Sample in JSON
+
+`{ "records": [ { "time": "2021-02-02T07:15:37.3640748Z",
+ "resourceId": "/SUBSCRIPTIONS/27CAFCA8-B9A4-4264-B399-45D0C9CCA1AB/RESOURCEGROUPS/AFDXPRIVATEPREVIEW/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDXPRIVATEPREVIEW-JESSIE",
+ "category": "FrontDoorHealthProbeLog",
+ "operationName": "Microsoft.Cdn/Profiles/FrontDoorHealthProbeLog/Write",
+ "properties": { "healthProbeId": "9642AEA07BA64675A0A7AD214ACF746E",
+ "POP": "MAA",
+ "httpVerb": "HEAD",
+ "result": "OriginError",
+ "httpStatusCode": "400",
+ "probeURL": "http://afdxprivatepreview.blob.core.windows.net:80/",
+ "originName": "afdxprivatepreview.blob.core.windows.net",
+ "originIP": "52.239.224.228:80",
+ "totalLatencyMilliseconds": "141",
+ "connectionLatencyMilliseconds": "68",
+ "DNSLatencyMicroseconds": "1814" } } ]
+} `
+
+## Activity Logs
+
+Activity logs provide information about the operations done on Azure Front Door Standard/Premium. The logs include details about what write operation was done on Azure Front Door, who did it, and when.
+
+> [!NOTE]
+> Activity logs don't include GET operations. They also don't include operations that you perform by using either the Azure portal or the original Management API.
+
+Access activity logs in your Front Door profile, or view the logs of all your Azure resources in Azure Monitor.
+
+To view activity logs:
+
+1. Select your Front Door profile.
+
+1. Select **Activity log.**
+
+1. Choose a filtering scope and then select **Apply**.
+
+## Next steps
+
+- Learn about [Azure Front Door Standard/Premium (Preview) Reports](how-to-reports.md).
+- Learn about [Azure Front Door Standard/Premium (Preview) real time monitoring metrics](how-to-monitor-metrics.md).
frontdoor How To Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-monitor-metrics.md
+
+ Title: Monitoring metrics for Azure Front Door Standard/Premium
+description: This article describes the Azure Front Door Standard/Premium monitoring metrics.
+++++ Last updated : 02/18/2021+++
+# Real-time Monitoring in Azure Front Door Standard/Premium
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
+
+Azure Front Door Standard/Premium is integrated with Azure Monitor and has 11 metrics to help monitor Azure Front Door Standard/Premium in real-time to track, troubleshoot, and debug issues.
+
+Azure Front Door Standard/Premium measures and sends its metrics in 60-second intervals. The metrics can take up to 3 minutes to appear in the portal. Metrics can be displayed in charts or a grid of your choice and are accessible via the portal, PowerShell, CLI, and API. For more information, see [Azure Monitor metrics](../../azure-monitor/platform/data-platform-metrics.md).
+
+The default metrics are free of charge. You can enable additional metrics for an extra cost.
+
+You can configure alerts for each metric such as a threshold for 4XXErrorRate or 5XXErrorRate. When the error rate exceeds the threshold, it will trigger an alert as configured. For more information, see [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/platform/alerts-metric.md).
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Metrics supported in Azure Front Door Standard/Premium
+
+| Metrics | Description | Dimensions |
+| - | - | - |
+| Bytes Hit ratio | The percentage of egress from the AFD cache, computed against the total egress. </br> **Byte Hit Ratio** = (egress from edge - egress from origin)/egress from edge (see the worked example after this table). </br> **Scenarios excluded in bytes hit ratio calculation**:</br> 1. You explicitly configure no cache either through Rules Engine or Query String caching behavior. </br> 2. You explicitly configure a cache-control directive with no-store or private cache. </br>3. Byte hit ratio can be low if most of the traffic is forwarded to origin rather than served from cache based on your configurations or scenarios. | Endpoint |
+| RequestCount | The number of client requests served by the CDN. | Endpoint, client country, client region, HTTP status, HTTP status group |
+| ResponseSize | The number of bytes sent as responses from AFD to clients. |Endpoint, client country, client region, HTTP status, HTTP status group |
+| TotalLatency | The total time from when the client request is received by the CDN **until the last response byte is sent from the CDN to the client**. |Endpoint, client country, client region, HTTP status, HTTP status group |
+| RequestSize | The number of bytes sent as requests from clients to AFD. | Endpoint, client country, client region, HTTP status, HTTP status group |
+| 4XX % ErrorRate | The percentage of all the client requests for which the response status code is 4XX. | Endpoint, Client Country, Client Region |
+| 5XX % ErrorRate | The percentage of all the client requests for which the response status code is 5XX. | Endpoint, Client Country, Client Region |
+| OriginRequestCount | The number of requests sent from AFD to origin. | Endpoint, Origin, HTTP status, HTTP status group |
+| OriginLatency | The time calculated from when the request was sent by AFD edge to the backend until AFD received the last response byte from the backend. | Endpoint, Origin |
+| OriginHealth% | The percentage of successful health probes from AFD to origin.| Origin, Origin Group |
+| WAF request count | The number of requests that matched a WAF rule. | Action, rule name, Policy Name |
+
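+As a hedged illustration of retrieving one of these metrics outside the portal, the following Azure CLI sketch queries the RequestCount metric from the table above. The subscription, resource group, and profile names are placeholders, and the resource provider path shown is an assumption to verify against your own profile's resource ID.
+
+```bash
+# Query the RequestCount metric for a Front Door Standard/Premium profile,
+# aggregated per minute (the resource ID below is a placeholder).
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Cdn/profiles/<profile-name>" \
+  --metric RequestCount \
+  --interval PT1M \
+  --aggregation Total \
+  --output table
+```
+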
+## Access Metrics in Azure portal
+
+1. From the Azure portal menu, select **All Resources** >> **\<your-AFD Standard/Premium (Preview) -profile>**.
+
+2. Under **Monitoring**, select **Metrics**:
+
+3. In **Metrics**, select the metric to add:
+
+ :::image type="content" source="../media/how-to-monitoring-metrics/front-door-metrics-1.png" alt-text="Screenshot of metrics page." lightbox="../media/how-to-monitoring-metrics/front-door-metrics-1-expanded.png":::
+
+4. Select **Add filter** to add a filter:
+
+ :::image type="content" source="../media/how-to-monitoring-metrics/front-door-metrics-2.png" alt-text="Screenshot of adding filters to metrics." lightbox="../media/how-to-monitoring-metrics/front-door-metrics-2-expanded.png":::
+
+5. Select **Apply splitting** to split data by different dimensions:
+
+ :::image type="content" source="../media/how-to-monitoring-metrics/front-door-metrics-4.png" alt-text="Screenshot of adding dimensions to metrics." lightbox="../media/how-to-monitoring-metrics/front-door-metrics-4-expanded.png":::
+
+6. Select **New chart** to add a new chart.
+
+## Configure Alerts in Azure portal
+
+1. Set up alerts on Azure Front Door Standard/Premium (Preview) by selecting **Monitoring** >> **Alerts**.
+
+1. Select **New alert rule** for the metrics listed in the Metrics section.
+
+Alerts are charged based on Azure Monitor. For more information about alerts, see [Azure Monitor alerts](../../azure-monitor/platform/alerts-overview.md).
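+
+As a sketch rather than an exact recipe, an equivalent alert rule can also be created from the Azure CLI. The names, threshold, and scope below are placeholders, and the metric name is an assumption; check the exact metric names exposed by your profile before using it.
+
+```bash
+# Create a metric alert that fires when the 5XX error rate exceeds 5 percent
+# (alert name, resource group, scope, and metric name are placeholders/assumptions).
+az monitor metrics alert create \
+  --name "frontdoor-5xx-alert" \
+  --resource-group <resource-group> \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Cdn/profiles/<profile-name>" \
+  --condition "avg Percentage5XX > 5" \
+  --description "Alert when the 5XX error rate goes above 5 percent"
+```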
+
+## Next steps
+
+- Learn about [Azure Front Door Standard/Premium Reports](how-to-reports.md).
+- Learn about [Azure Front Door Standard/Premium Logs](how-to-logs.md).
frontdoor How To Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/how-to-reports.md
+
+ Title: 'Azure Front Door Standard/Premium (Preview) Reports'
+description: This article explains how reporting works in Azure Front Door.
++++ Last updated : 02/18/2021+++
+# Azure Front Door Standard/Premium (Preview) Reports
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure Front Door Standard/Premium Analytics Reports provide a built-in and all-around view of how your Azure Front Door behaves, along with associated Web Application Firewall metrics. You can also take advantage of Access Logs to do further troubleshooting and debugging. Azure Front Door Analytics reports include traffic reports and security reports.
+
+| Reports | Details |
+|||
+| Overview of key metrics | Shows overall data that got sent from Azure Front Door edges to clients<br/>- Peak bandwidth<br/>- Requests <br/>- Cache hit ratio<br/> - Total latency<br/>- 5XX error rate |
+| Traffic by Domain | - Provides an overview of all the domains under the profile<br/>- Breakdown of data transferred out from AFD edge to client<br/>- Total requests<br/>- 3XX/4XX/5XX response code by domains |
+| Traffic by Location | - Shows a map view of request and usage by top countries<br/>- Trend view of top countries |
+| Usage | - Displays data transfer out from Azure Front Door edge to clients<br/>- Data transfer out from origin to AFD edge<br/>- Bandwidth from AFD edge to clients<br/>- Bandwidth from origin to AFD edge<br/>- Requests<br/>- Total latency<br/>- Request count trend by HTTP status code |
+| Caching | - Shows cache hit ratio by request count<br/>- Trend view of hit and miss requests |
+| Top URL | - Shows request count <br/>- Data transferred <br/>- Cache hit ratio <br/>- Response status code distribution for the most requested 50 assets. |
+| Top Referrer | - Shows request count <br/>- Data transferred <br/>- Cache hit ratio <br/>- Response status code distribution for the top 50 referrers that generate traffic. |
+| Top User Agent | - Shows request count <br/>- Data transferred <br/>- Cache hit ratio <br/>- Response status code distribution for the top 50 user agents that were used to request content. |
+
+| Security reports | Details |
+|||
+| Overview of key metrics | - Shows matched WAF rules<br/>- Matched OWASP rules<br/>- Matched BOT rules<br/>- Matched custom rules |
+| Metrics by dimensions | - Breakdown of matched WAF rules trend by action<br/>- Doughnut chart of events by Rule Set Type and event by rule group<br/>- Break down list of top events by rule ID, country, IP address, URL, and user agent |
+
+> [!NOTE]
+> Security reports are only available with the Azure Front Door Premium SKU.
+
+Most of the reports are based on access logs and are offered free of charge to customers on Azure Front Door. Customers don't have to enable access logs or do any configuration to view these reports. Reports are accessible through the portal and the API. CSV download is also supported.
+
+Reports support any selected date range within the previous 90 days, with data points every 5 minutes, every hour, or every day depending on the selected date range. Normally, you can view data with a delay of less than an hour, and occasionally with a delay of up to a few hours.
+
+## Access Reports using the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your Azure Front Door Standard/Premium profile.
+
+1. In the navigation pane, select **Reports** or **Security** under *Analytics*.
+
+ :::image type="content" source="../media/how-to-reports/front-door-reports-landing-page.png" alt-text="Screenshot of Reports landing page":::
+
+1. There are seven tabs for different dimensions. Select the dimension of interest:
+
+ * Traffic by domain
+ * Usage
+ * Traffic by location
+ * Cache
+    * Top URL
+ * Top referrer
+ * Top user agent
+
+1. After choosing the dimension, you can select different filters.
+
+ 1. **Show data for** - Select the date range for which you want to view traffic by domain. Available ranges are:
+
+ * Last 24 hours
+ * Last 7 days
+ * Last 30 days
+ * Last 90 days
+ * This month
+ * Last month
+ * Custom date
+
+    By default, data is shown for the last seven days. For tabs with line charts, the data granularity goes with the date range you selected as the default behavior.
+
+    * 5 minutes - one data point every 5 minutes for date ranges less than or equal to 24 hours.
+    * By hour - one data point every hour for date ranges between 24 hours and 30 days.
+    * By day - one data point per day for date ranges longer than 30 days.
+
+    You can always use Aggregation to change the default aggregation granularity. Note: 5-minute granularity doesn't work for date ranges longer than 14 days.
+
+ 1. **Location** - Select single or multiple client locations by country. Countries are grouped into six regions: North America, Asia, Europe, Africa, Oceania, and South America. Refer to [region/country mapping](https://en.wikipedia.org/wiki/Subregion). By default, all countries are selected.
+
+ :::image type="content" source="../media/how-to-reports/front-door-reports-dimension-locations.png" alt-text="Screenshot of Reports for location dimension.":::
+
+ 1. **Protocol** - Select either HTTP or HTTPS to view traffic data.
+
+ :::image type="content" source="../media/how-to-reports/front-door-reports-dimension-protocol.png" alt-text="Screenshot of Reports for protocol dimension.":::
+
+    1. **Domains** - Select one or more endpoints or custom domains. By default, all endpoints and custom domains are selected.
+
+    * If you delete an endpoint or a custom domain in one profile and then recreate the same endpoint or domain in another profile, it's considered a second endpoint.
+    * If you're viewing reports by custom domain and you delete a custom domain and then bind it to a different endpoint, they're treated as one custom domain. If you view by endpoint, they're treated as separate items.
+
+ :::image type="content" source="../media/how-to-reports/front-door-reports-dimension-domain.png" alt-text="Screenshot of Reports for domain dimension.":::
+
+1. If you want to export the data to a CSV file, select the *Download CSV* link on the selected tab.
+
+ :::image type="content" source="../media/how-to-reports/front-door-reports-download-csv.png" alt-text="Screenshot of download csv file for Reports.":::
+
+### Key metrics for all reports
+
+| Metric | Description |
+|||
+| Data Transferred | Shows data transferred from AFD edge POPs to client for the selected time frame, client locations, domains, and protocols. |
+| Peak Bandwidth | Peak bandwidth usage in bits per second from Azure Front Door edge POPs to clients for the selected time frame, client locations, domains, and protocols. |
+| Total Requests | The number of client requests that AFD edge POPs responded to for the selected time frame, client locations, domains, and protocols. |
+| Cache Hit Ratio | The percentage of all the cacheable requests for which AFD served the contents from its edge caches for the selected time frame, client locations, domains, and protocols. |
+| 5XX Error Rate | The percentage of requests for which the HTTP status code to client was a 5XX for the selected time frame, client locations, domains, and protocols. |
+| Total Latency | Average latency of all the requests for the selected time frame, client locations, domains, and protocols. The latency for each request is measured as the total time from when the client request is received by Azure Front Door until the last response byte is sent from Azure Front Door to the client. |
+
+## Traffic by Domain
+
+Traffic by Domain provides a grid view of all the domains under this Azure Front Door profile. In this report you can view:
+* Requests
+* Data transferred out from Azure Front Door to client
+* Requests with status code (3XX, 4XX, and 5XX) of each domain
+
+Domains include endpoints and custom domains, as explained in the Access Reports section.
+
+If you find the metrics below your expectations, you can go to other tabs to investigate further or view the access log for more information.
+++
+## Usage
+
+This report shows the trends of traffic and response status code by different dimensions, including:
+
+* Data Transferred from edge to client and from origin to edge in line chart.
+
+* Data Transferred from edge to client by protocol in line chart.
+
+* Number of requests from edge to clients in line chart.
+
+* Number of requests from edge to clients by protocol, HTTP and HTTPS, in line chart.
+
+* Bandwidth from edge to client in line chart.
+
+* Total latency, which measures the total time from the client request received by Front Door until the last response byte sent from Front Door to client.
+
+* Number of requests from edge to clients by HTTP status code, in line chart. Every request generates an HTTP status code. The HTTP status code appears in HTTPStatusCode in the raw log. The status code describes how the CDN edge handled the request. For example, a 2xx status code indicates that the request was successfully served to a client, while a 4xx status code indicates that an error occurred. For more information about HTTP status codes, see the list of HTTP status codes.
+
+* Number of requests from the edge to clients by HTTP status code, and the percentage of requests by HTTP status code among all requests, shown in a grid.
++
+## Traffic by Location
+
+This report displays the top 50 locations by the country of the visitors that access your assets the most. The report also provides a breakdown of metrics by country and gives you an overall view of the countries where the most traffic gets generated. Lastly, you can see which countries have a higher cache hit ratio or more 4XX/5XX error codes.
++
+The following are included in the reports:
+
+* A world map view of the top 50 countries by data transferred out or requests of your choice.
+* Two line charts trend view of the top five countries by data transferred out and requests of your choice.
+* A grid of the top countries with corresponding data transferred out from AFD to clients, data transferred out % of all countries, requests, request % among all countries, cache hit ratio, 4XX response code and 5XX response code.
+
+## Caching
+
+The caching report provides a chart view of cache hits/misses and cache hit ratio based on requests. These key metrics explain how the CDN is caching content; because the fastest performance results from cache hits, you can optimize data delivery speeds by minimizing cache misses. This report includes:
+
+* Cache hit and miss count trend, in line chart.
+
+* Cache hit ratio in line chart.
+
+Cache hits/misses describe the number of cache hits and cache misses for client requests.
+
+* Hits: the client requests that are served directly from Azure CDN edge servers. Refers to those requests whose values for CacheStatus in raw logs are HIT, PARTIAL_HIT, or REMOTE_HIT.
+
+* Miss: the client requests that are served by Azure CDN edge servers fetching contents from origin. Refers to those requests whose values for the field CacheStatus in raw logs are MISS.
+
+**Cache hit ratio** describes the percentage of cached requests that are served from the edge directly. The formula for the cache hit ratio is: `(HIT + PARTIAL_HIT + REMOTE_HIT) / (HIT + MISS + PARTIAL_HIT + REMOTE_HIT) * 100%`.
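+
+As a hypothetical worked example, if a reporting window contains 900 HIT, 40 PARTIAL_HIT, 10 REMOTE_HIT, and 50 MISS requests, the cache hit ratio is (900 + 40 + 10) / (900 + 40 + 10 + 50) = 95%.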
+
+This report takes caching scenarios into consideration. Requests that meet the following requirements are included in the calculation:
+
+* The requested content was cached on the POP closest to the requester or origin shield.
+
+* Partial cached contents for object chunking.
+
+It excludes all of the following cases:
+
+* Requests that are denied because of Rule Set.
+
+* Requests that match a Rule Set rule that has caching disabled.
+
+* Requests that are blocked by WAF.
+
+* Requests for which the origin response headers indicate that they shouldn't be cached. For example, `Cache-Control: private`, `Cache-Control: no-cache`, or `Pragma: no-cache` headers will prevent an asset from being cached.
++
+## Top URLs
+
+Top URLs allow you to view the amount of traffic incurred over a particular endpoint or custom domain. You'll see data for the most requested 50 assets during any period in the past 90 days. Popular URLs are displayed with the following values. You can sort URLs by request count, request %, data transferred, and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected. URL refers to the value of RequestUri in the access log.
++
+* URL, refers to the full path of the requested asset in the format of `http(s)://contoso.com/index.html/images/example.jpg`.
+* Request counts.
+* Request % of the total requests served by Azure Front Door.
+* Data transferred.
+* Data transferred %.
+* Cache Hit Ratio %
+* Requests with response code as 4XX
+* Requests with response code as 5XX
+
+> [!NOTE]
+> Top URLs may change over time and to get an accurate list of the top 50 URLs, Azure Front Door counts all your URL requests by hour and keeps the running total over the course of a day. The URLs at the bottom of the 500 URLs may rise onto or drop off the list over the day, so the total numbers for these URLs are approximations.
+>
+> The top 50 URLs may rise and fall in the list, but they rarely disappear from the list, so the numbers for top URLs are usually reliable. When a URL drops off the list and rises up again over a day, the number of requests during the period when it's missing from the list is estimated based on the request number of the URLs that appear in that period.
+>
+> The same logic applies to Top User Agent.
+
+## Top Referrers
+
+Top Referrers allow customers to view the top 50 referrers that originated the most requests to the contents on a particular endpoint or custom domain. You can view data for any period in the past 90 days. A referrer indicates the URL from which a request was generated. The referrer may come from a search engine or other websites. If a user types a URL (for example, http(s)://contoso.com/index.html) directly into the address line of a browser, the referrer for the request is "Empty". The Top Referrers report includes the following values. You can sort by request count, request %, data transferred, and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected.
+
+* Referrer, the value of Referrer in raw logs
+* Request counts
+* Request % of total requests served by Azure CDN in the selected time period.
+* Data transferred
+* Data transferred %
+* Cache Hit Ratio %
+* Requests with response code as 4XX
+* Requests with response code as 5XX
++
+## Top User Agent
+
+This report gives you a graphical and statistical view of the top 50 user agents that were used to request content. For example:
+* Mozilla/5.0 (Windows NT 10.0; WOW64)
+* AppleWebKit/537.36 (KHTML, like Gecko)
+* Chrome/86.0.4240.75
+* Safari/537.36.
+
+A grid displays the request counts, request %, data transferred, data transferred %, Cache Hit Ratio %, requests with response code as 4XX, and requests with response code as 5XX. User Agent refers to the value of UserAgent in access logs.
+
+## Security Report
+
+This report gives you a graphical and statistical view of WAF patterns by different dimensions.
+
+| Dimensions | Description |
+|||
+| Overview metrics- Matched WAF rules | Requests that match custom WAF rules, managed WAF rules and bot manager. |
+| Overview metrics- Blocked Requests | The percentage of requests that are blocked by WAF rules among all the requests that matched WAF rules. |
+| Overview metrics- Matched Managed Rules | Requests that match managed WAF rules. |
+| Overview metrics- Matched Custom Rule | Requests that match custom WAF rules. |
+| Overview metrics- Matched Bot Rule | Requests that match Bot Manager. |
+| WAF request trend by action | Four line-charts trend for requests that are Block, Log, Allow and Redirect. |
+| Events by Rule Type | Doughnut chart of the WAF requests distribution by Rule Type, for example, bot, custom rules, and managed rules. |
+| Events by Rule Group | Doughnut chart of the WAF requests distribution by Rule Group. |
+| Requests by actions | A table of requests by actions, in descending order. |
+| Requests by top Rule IDs | A table of requests by top 50 rule IDs, in descending order. |
+| Requests by top countries | A table of requests by top 50 countries, in descending order. |
+| Requests by top client IPs | A table of requests by top 50 IPs, in descending order. |
+| Requests by top Request URL | A table of requests by top 50 URLs, in descending order. |
+| Requests by top Hostnames | A table of requests by top 50 hostnames, in descending order. |
+| Requests by top user agents | A table of requests by top 50 user agents, in descending order. |
+
+## CSV format
+
+You can download CSV files for different tabs in reports. This section describes the values in each CSV file.
+
+### General information about the CSV report
+
+Every CSV report includes some general information that's available in all CSV files, with variables based on the report you download.
++
+| Value | Description |
+|||
+| Report | The name of the report. |
+| Domains | The list of the endpoints or custom domains for the report. |
+| StartDateUTC | The start of the date range for which you generated the report, in Coordinated Universal Time (UTC) |
+| EndDateUTC | The end of the date range for which you generated the report, in Coordinated Universal Time (UTC) |
+| GeneratedTimeUTC | The date and time when you generated the report, in Coordinated Universal Time (UTC) |
+| Location | The list of the countries where the client requests originated. The value is ALL by default. Not applicable to Security report. |
+| Protocol | The protocol of the request, HTTP or HTTPS. Not applicable to Top URL and Traffic by User Agent in Reports and Security report. |
+| Aggregation | The granularity of data aggregation in each row, every 5 minutes, every hour, and every day. Not applicable to Traffic by Domain, Top URL, and Traffic by User Agent in Reports and Security report. |
+
+### Data in Traffic by Domain
+
+* Domain
+* Total Request
+* Cache Hit Ratio
+* 3XX Requests
+* 4XX Requests
+* 5XX Requests
+* ByteTransferredFromEdgeToClient
+
+### Data in Traffic by Location
+
+* Location
+* TotalRequests
+* Request%
+* BytesTransferredFromEdgeToClient
+
+### Data in Usage
+
+There are three reports in this CSV file: one for the HTTP protocol, one for the HTTPS protocol, and one for HTTP status codes.
+
+Reports for HTTP and HTTPS share the same data set.
+
+* Time
+* Protocol
+* DataTransferred(bytes)
+* TotalRequest
+* bpsFromEdgeToClient
+* 2XXRequest
+* 3XXRequest
+* 4XXRequest
+* 5XXRequest
+
+The report for HTTP status codes includes:
+
+* Time
+* DataTransferred(bytes)
+* TotalRequest
+* bpsFromEdgeToClient
+* 2XXRequest
+* 3XXRequest
+* 4XXRequest
+* 5XXRequest
+
+### Data in Caching
+
+* Time
+* CacheHitRatio
+* HitRequests
+* MissRequests
+
+### Data in Top URL
+
+* URL
+* TotalRequests
+* Request%
+* DataTransferred(bytes)
+* DataTransferred%
+
+### Data in User Agent
+
+* UserAgent
+* TotalRequests
+* Request%
+* DataTransferred(bytes)
+* DataTransferred%
+
+### Security Report
+
+There are seven tables, all with the same fields shown below.
+
+* BlockedRequests
+* AllowedRequests
+* LoggedRequests
+* RedirectedRequests
+* OWASPRuleRequests
+* CustomRuleRequests
+* BotRequests
+
+The seven tables are for time, rule ID, country, IP address, URL, hostname, and user agent.
+
+## Next steps
+
+Learn about [Azure Front Door Standard/Premium real time monitoring metrics](how-to-monitor-metrics.md).
frontdoor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/overview.md
+
+ Title: Azure Front Door Standard/Premium| Microsoft Docs
+description: This article provides an overview of Azure Front Door Standard/Premium.
+++++ Last updated : 02/18/2021+++
+# What is Azure Front Door Standard/Premium (Preview)?
+
+> [!IMPORTANT]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [Azure Front Door Docs](../front-door-overview.md).
+
+Azure Front Door Standard/Premium is a fast, reliable, and secure modern cloud CDN that uses the Microsoft global edge network and integrates with intelligent threat protection. It combines the capabilities of Azure CDN Standard from Microsoft, Azure Front Door, and Azure Web Application Firewall (WAF) into a single secure cloud CDN platform.
+
+With Azure Front Door Standard/Premium, you can transform your global consumer and enterprise applications into secure and high-performing personalized modern applications with content that reaches a global audience with low latency.
+
+ :::image type="content" source="../media/overview/front-door-overview.png" alt-text="Azure Front Door Standard/Premium architecture" lightbox="../media/overview/front-door-overview-expanded.png":::
+
+Azure Front Door Standard/Premium works at Layer 7 (the HTTP/HTTPS layer) using anycast protocol with split TCP and Microsoft's global network to improve global connectivity. Based on your routing method, you can ensure that Azure Front Door routes your client requests to the fastest and most available origin. An application backend is any Internet-facing service hosted inside or outside of Azure. Azure Front Door Standard/Premium provides a range of traffic-routing methods and origin health monitoring options to suit different application needs and automatic failover scenarios. Similar to Traffic Manager, Front Door is resilient to failures, including failures to an entire Azure region.
+
+Azure Front Door also protects your app at the edges with Web Application Firewall, Bot Protection, and built-in layer 3/layer 4 DDoS protection. It also secures your private back-ends with Private Link service. Azure Front Door gives you Microsoft's best-in-practice security at global scale.
+
+>[!NOTE]
+> Azure provides a suite of fully managed load-balancing solutions for your scenarios.
+>
+> * If you are looking to do DNS based global routing and do **not** have requirements for Transport Layer Security (TLS) protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review [Traffic Manager](../../traffic-manager/traffic-manager-overview.md).
+> * If you want to load balance between your servers in a region at the application layer, review [Application Gateway](../../application-gateway/overview.md)
+> * To do network layer load balancing, review [Load Balancer](../../load-balancer/load-balancer-overview.md).
+>
+> Your end-to-end scenarios may benefit from combining these solutions as needed.
+> For an Azure load-balancing options comparison, see [Overview of load-balancing options in Azure](/azure/architecture/guide/technology-choices/load-balancing-overview).
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Why use Azure Front Door Standard/Premium (Preview)?
+
+Azure Front Door Standard/Premium provides a single unified platform for static content and dynamic application acceleration, with enhanced security capabilities. Front Door also enables you to define, manage, and monitor the global routing for your app.
+
+Key features included with Azure Front Door Standard/Premium (Preview):
+
+- Accelerated application performance by using split TCP-based anycast protocol.
+
+- Intelligent **[health probe](concept-health-probes.md)** monitoring and load balancing among **[origins](concept-origin.md)**.
+
+- Define your own custom domain with flexible domain validation.
+
+- Application security with integrated [Web Application Firewall (WAF)](../../web-application-firewall/afds/afds-overview.md).
+
+- SSL offload and integrated certificate management.
+
+- Secure your origins with **[Private Link](concept-private-link.md)**.
+
+- Customizable traffic routing and optimizations via **[Rule Set](concept-rule-set.md)**.
+
+- **[Built-in reports](how-to-reports.md)** with all-in-one dashboard for both Front Door and security patterns.
+
+- **[Real-time monitoring](how-to-monitor-metrics.md)** and alerts that integrate with Azure Monitoring.
+
+- **[Logging](how-to-logs.md)** for each Front Door request and failed health probes.
+
+- Native support of end-to-end IPv6 connectivity and HTTP/2 protocol.
+
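+For a first hands-on look outside the portal, the following Azure CLI sketch creates a Standard profile and an endpoint. It's illustrative only: the resource group, profile, and endpoint names are placeholders, and the `az afd` command group may require a recent Azure CLI version or the Front Door CLI extension. The portal quickstart in Next steps remains the primary path.
+
+```bash
+# Create a Front Door Standard profile (names below are placeholders).
+az afd profile create \
+  --profile-name myFrontDoorProfile \
+  --resource-group myResourceGroup \
+  --sku Standard_AzureFrontDoor
+
+# Add an endpoint to the profile.
+az afd endpoint create \
+  --endpoint-name myEndpoint \
+  --profile-name myFrontDoorProfile \
+  --resource-group myResourceGroup \
+  --enabled-state Enabled
+```
+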
+## Pricing
+
+Azure Front Door Standard/Premium has two SKUs, Standard and Premium. See [Tier Comparison](tier-comparison.md). For pricing information, see [Front Door Pricing](https://azure.microsoft.com/pricing/details/frontdoor/).
+
+## What's new?
+
+Subscribe to the RSS feed and view the latest Azure Front Door feature updates on the [Azure Updates](https://azure.microsoft.com/updates/?category=networking&query=Azure%20Front%20Door) page.
+
+## Next steps
+
+* Learn how to [create a Front Door](create-front-door-portal.md).
frontdoor Tier Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/tier-comparison.md
+
+ Title: Azure Front Door Standard/Premium SKU comparison
+description: This article provides an overview of Azure Front Door Standard and Premium SKU and feature differences between them.
+++++ Last updated : 02/18/2021+++
+# Overview of Azure Front Door Standard/Premium SKU (Preview)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? See the [Azure Front Door overview](../front-door-overview.md).
+
+Azure Front Door is offered in three different SKUs: [Azure Front Door](../front-door-overview.md), Azure Front Door Standard (Preview), and Azure Front Door Premium (Preview). The Azure Front Door Standard/Premium SKUs combine the capabilities of Azure Front Door, Azure CDN Standard from Microsoft, and Azure WAF into a single secure cloud CDN platform with intelligent threat protection.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+* **Azure Front Door Standard SKU** is:
+
+ * Content delivery optimized
+ * Offering both static and dynamic content acceleration
+ * Global load balancing
+ * SSL offload
+ * Domain and certificate management
+ * Enhanced traffic analytics
+ * Basic security capabilities
+
+* **Azure Front Door Premium SKU** builds on capabilities of Standard SKU, and adds:
+
+ * Extensive security capabilities across WAF
+ * BOT protection
+ * Private Link support
+ * Integration with Microsoft Threat Intelligence and security analytics.
+
+![Diagram showing a comparison between Front Door SKUs.](../media/tier-comparison/tier-comparison.png)
+
+## Feature comparison
+
+| Feature | Standard | Premium |
+|-|:-:|:-:|
+| Custom domains | Yes | Yes |
+| SSL Offload | Yes | Yes |
+| Caching | Yes | Yes |
+| Compression | Yes | Yes |
+| Global load balancing | Yes | Yes |
+| Layer 7 routing | Yes | Yes |
+| URL rewrite | Yes | Yes |
+| Rules Engine | Yes | Yes |
+| Private Origin (Private Link) | No | Yes |
+| WAF | No | Yes |
+| Bot Protection | No | Yes |
+| Enhanced Metrics and diagnostics | Yes | Yes |
+| Traffic reports | Yes | Yes |
+| Security Report | No | Yes |
+
+## Next steps
+
+Learn how to [create a Front Door](create-front-door-portal.md)
frontdoor Troubleshoot Allowed Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/troubleshoot-allowed-certificate-authority.md
+
+ Title: 'Allowed certificate authorities for Azure Front Door Standard/Premium (Preview)'
+description: This article lists all the certificate authorities allowed when you create your own certificate.
++++ Last updated : 02/18/2021+++
+# Allowed certificate authorities for Azure Front Door Standard/Premium (Preview)
+
+When you enable the HTTPS feature by using your own certificate for an Azure Front Door Standard/Premium custom domain, you need to use an allowed certificate authority (CA) to create your TLS/SSL certificate. If you use a non-allowed CA or a self-signed certificate, your request will be rejected.
+
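+If you're not sure which CA issued your certificate, you can inspect its issuer before uploading it. The following sketch assumes the certificate is available locally as a PEM file; the file name is a placeholder.
+
+```bash
+# Print the issuer of a local certificate file (the file name is a placeholder).
+openssl x509 -in my-custom-domain-cert.pem -noout -issuer
+```
+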
+The following CAs are allowed when you create your own certificate:
+
+- AddTrust External CA Root
+- AlphaSSL Root CA
+- AME Infra CA 01
+- AME Infra CA 02
+- Ameroot
+- APCA-DM3P
+- Atos TrustedRoot 2011
+- Autopilot Root CA
+- Baltimore CyberTrust Root
+- Class 3 Public Primary Certification Authority
+- COMODO RSA Certification Authority
+- COMODO RSA Domain Validation Secure Server CA
+- D-TRUST Root Class 3 CA 2 2009
+- DigiCert Cloud Services CA-1
+- DigiCert Global Root CA
+- DigiCert Global Root G2
+- DigiCert High Assurance CA-3
+- DigiCert High Assurance EV Root CA
+- DigiCert SHA2 Extended Validation Server CA
+- DigiCert SHA2 High Assurance Server CA
+- DigiCert SHA2 Secure Server CA
+- DST Root CA X3
+- D-trust Root Class 3 CA 2 2009
+- Encryption Everywhere DV TLS CA
+- Entrust Root Certification Authority
+- Entrust Root Certification Authority - G2
+- Entrust.net Certification Authority (2048)
+- GeoTrust Global CA
+- GeoTrust Primary Certification Authority
+- GeoTrust Primary Certification Authority - G2
+- Geotrust RSA CA 2018
+- GlobalSign
+- GlobalSign Extended Validation CA - SHA256 - G2
+- GlobalSign Organization Validation CA - G2
+- GlobalSign Root CA
+- Go Daddy Root Certificate Authority - G2
+- Go Daddy Secure Certificate Authority - G2
+- Let's Encrypt Authority X3
+- Microsec e-Szigno Root CA 2009
+- QuoVadis Root CA2 G3
+- RapidSSL RSA CA 2018
+- Security Communication RootCA1
+- Security Communication RootCA2
+- Security Communication RootCA3
+- Symantec Class 3 EV SSL CA - G3
+- Symantec Class 3 Secure Server CA - G4
+- Symantec Enterprise Mobile Root for Microsoft
+- Thawte Primary Root CA
+- Thawte Primary Root CA - G2
+- Thawte Primary Root CA - G3
+- Thawte RSA CA 2018
+- Thawte Timestamping CA
+- TrustAsia TLS RSA CA
+- VeriSign Class 3 Extended Validation SSL CA
+- VeriSign Class 3 Extended Validation SSL SGC CA
+- VeriSign Class 3 Public Primary Certification Authority - G5
+- VeriSign International Server CA - Class 3
+- VeriSign Time Stamping Service Root
+- VeriSign Universal Root Certification Authority
frontdoor Troubleshoot Compression https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/troubleshoot-compression.md
+
+ Title: Troubleshooting file compression in Azure Front Door Standard/Premium
+description: Learn how to troubleshoot issues with file compression in Azure Front Door. This article covers several possible causes.
++++ Last updated : 02/18/2020+++
+# Troubleshooting Azure Front Door Standard/Premium file compression
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? See the [Azure Front Door overview](../front-door-overview.md).
+
+This article helps you troubleshoot Azure Front Door Standard/Premium file compression issues.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Symptom
+
+Compression for your route is enabled, but files are being returned uncompressed.
+
+> [!TIP]
+> To check whether your files are being returned compressed, you need to use a tool like [Fiddler](https://www.telerik.com/fiddler) or your browser's [developer tools](https://developer.microsoft.com/microsoft-edge/platform/documentation/f12-devtools-guide/). Check the HTTP response headers returned with your cached CDN content. If there is a header named `Content-Encoding` with a value of **gzip**, **bzip2**, or **deflate**, your content is compressed.
+>
+> ![Content-Encoding header](../media/troubleshoot-compression/content-header.png)
+>
+
+## Cause
+
+There are several possible causes, including:
+
+* The requested content isn't eligible for compression.
+* Compression isn't enabled for the requested file type.
+* The HTTP request didn't include a header requesting a valid compression type.
+* Origin is sending chunked content.
+
+## Troubleshooting steps
+
+> [!TIP]
+> As with deploying new endpoints, AFD configuration changes take some time to propagate through the network. Usually, changes are applied within 90 minutes. If this is the first time you've set up compression for your CDN endpoint, you should consider waiting 1-2 hours to be sure the compression settings have propagated to the POPs.
+>
+
+### Verify the request
+
+First, double-check the request. You can use your browser's **[developer tools](https://developer.microsoft.com/microsoft-edge/platform/documentation/f12-devtools-guide/)** to view the requests being made.
+
+* Verify the request is being sent to your endpoint URL, `<endpointname>.z01.azurefd.net`, and not your origin.
+* Verify the request contains an **Accept-Encoding** header, and the value for that header contains **gzip**, **deflate**, or **bzip2**.
+
+![CDN request headers](../media/troubleshoot-compression/request-headers.png)
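+
+As a quick alternative to browser developer tools, you can send a request with curl and check the response headers. This is only a sketch; the endpoint host name and asset path are placeholders.
+
+```bash
+# Request a cacheable asset with an Accept-Encoding header and print only the
+# response headers (endpoint host name and path are placeholders).
+curl -s -o /dev/null -D - \
+  -H "Accept-Encoding: gzip, deflate" \
+  https://<endpointname>.z01.azurefd.net/styles/site.css
+```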
+
+### Verify compression settings
+
+Navigate to your endpoint in the [Azure portal](https://portal.azure.com) and select the **Configure** button in the Routes panel. Verify compression is **enabled**.
+
+![CDN compression settings](../media/troubleshoot-compression/compression-settings.png)
+
+### Check the request at the origin server for a **Via** header
+
+The **Via** HTTP header indicates to the web server that the request is being passed by a proxy server. Microsoft IIS web servers by default don't compress responses when the request contains a **Via** header. To override this behavior, do the following:
+
+* **IIS 6**: Set HcNoCompressionForProxies="FALSE" in the IIS Metabase properties. For more information, see [IIS 6 Compression](/previous-versions/iis/6.0-sdk/ms525390(v=vs.90)).
+* **IIS 7 and up**: Set both **noCompressionForHttp10** and **noCompressionForProxies** to *False* in the server configuration. For more information, see [HTTP Compression](https://www.iis.net/configreference/system.webserver/httpcompression).
frontdoor Troubleshoot Cross Origin Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/troubleshoot-cross-origin-resources.md
+
+ Title: Using Azure Front Door Standard/Premium with Cross-Origin Resource Sharing
+description: Learn how to use Azure Front Door (AFD) with Cross-Origin Resource Sharing (CORS).
++++ Last updated : 02/18/2021+++
+# Using Azure Front Door Standard/Premium with Cross-Origin Resource Sharing (CORS)
+
+> [!Note]
+> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? See the [Azure Front Door overview](../front-door-overview.md).
+
+## What is CORS?
+
+CORS (Cross Origin Resource Sharing) is an HTTP feature that enables a web application running under one domain to access resources in another domain. To reduce the possibility of cross-site scripting attacks, all modern web browsers implement a security restriction known as [same-origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy). This prevents a web page from calling APIs in a different domain. CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin.
+
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## How it works
+
+There are two types of CORS requests, *simple requests* and *complex requests.*
+
+### For simple requests:
+
+1. The browser sends the CORS request with another **Origin** HTTP request header. The value of this header is the origin that served the parent page, which is defined as the combination of *protocol,* *domain,* and *port.* When a page from https\://www.contoso.com attempts to access a user's data in the fabrikam.com origin, the following request header would be sent to fabrikam.com:
+
+ `Origin: https://www.contoso.com`
+
+2. The server may respond with any of the following:
+
+ * An **Access-Control-Allow-Origin** header in its response indicating which origin site is allowed. For example:
+
+ `Access-Control-Allow-Origin: https://www.contoso.com`
+
+ * An HTTP error code such as 403 if the server doesn't allow the cross-origin request after checking the Origin header
+
+ * An **Access-Control-Allow-Origin** header with a wildcard that allows all origins:
+
+ `Access-Control-Allow-Origin: *`
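+
+To see which of these responses is returned for a given origin, you can send a test request that includes an **Origin** header and inspect the response headers. This is only a sketch, and both URLs are placeholders.
+
+```bash
+# Send a simple cross-origin GET and print only the response headers
+# (both URLs are placeholders).
+curl -s -o /dev/null -D - \
+  -H "Origin: https://www.contoso.com" \
+  https://www.fabrikam.com/api/data
+```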
+
+### For complex requests:
+
+A complex request is a CORS request where the browser is required to send a *preflight request* (that is, a preliminary probe) before sending the actual CORS request. The preflight request asks the server for permission to send the original CORS request, and is an `OPTIONS` request to the same URL.
+
+> [!TIP]
+> For more details on CORS flows and common pitfalls, view the [Guide to CORS for REST APIs](https://www.moesif.com/blog/technical/cors/Authoritative-Guide-to-CORS-Cross-Origin-Resource-Sharing-for-REST-APIs/).
+>
+>
+
+## Wildcard or single origin scenarios
+CORS on Azure Front Door will work automatically with no extra configuration when the **Access-Control-Allow-Origin** header is set to wildcard (*) or a single origin. Azure Front Door will cache the first response and ensuing requests will use the same header.
+
+If requests have already been made to Azure Front Door before CORS was set on your origin, you'll need to purge content on your endpoint to reload the content with the **Access-Control-Allow-Origin** header.
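+
+A hedged sketch of purging cached content with the Azure CLI follows; the resource group, profile, and endpoint names are placeholders, and you can scope the content paths more narrowly than the wildcard shown.
+
+```bash
+# Purge all cached content on an endpoint so responses are fetched again
+# from the origin with the CORS headers (names below are placeholders).
+az afd endpoint purge \
+  --resource-group myResourceGroup \
+  --profile-name myFrontDoorProfile \
+  --endpoint-name myEndpoint \
+  --content-paths '/*'
+```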
+
+## Multiple origin scenarios
+If you need to allow a specific list of origins for CORS, things get a little more complicated. The problem occurs when the CDN caches the **Access-Control-Allow-Origin** header for the first CORS origin. When a different CORS origin makes another request, the CDN serves the cached **Access-Control-Allow-Origin** header, which won't match. There are several ways to correct this problem.
+
+### Azure Front Door Rule Set
+
+On Azure Front Door, you can create a rule in the Azure Front Door [Rule Set](concept-rule-set.md) to check the **Origin** header on the request. If it's a valid origin, your rule sets the **Access-Control-Allow-Origin** header with the correct value. In this case, the **Access-Control-Allow-Origin** header from the file's origin server is ignored, and Azure Front Door's rules engine completely manages the allowed CORS origins.
++
+> [!TIP]
+> You can add more actions to your rule to modify additional response headers, such as **Access-Control-Allow-Methods**.
+>
+
+## Next steps
+
+* Learn how to [create a Front Door](create-front-door-portal.md).
frontdoor Troubleshoot Route Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/troubleshoot-route-issues.md
+
+ Title: Troubleshoot Azure Front Door Standard/Premium configuration problems
+description: In this tutorial, you'll learn how to troubleshoot some of the common problems that you might face for your Azure Front Door Standard/Premium instance.
++++ Last updated : 02/18/2021+++
+# Troubleshooting common routing problems with Azure Front Door Standard/Premium
+
+This article describes how to troubleshoot common routing problems that you might face for your Azure Front Door configuration.
+
+## 503 response from Azure Front Door after a few seconds
+
+### Symptom
+
+* Regular requests sent to your backend without going through Azure Front Door are succeeding. Going via Azure Front Door results in 503 error responses.
+* The failure from Azure Front Door typically shows after about 30 seconds.
+
+### Cause
+
+The cause of this problem can be one of two things:
+
+* Your origin is taking longer than the timeout configured (default is 30 seconds) to receive the request from Azure Front Door.
+* The time it takes to send a response to the request from Azure Front Door is taking longer than the timeout value.
+
+### Troubleshooting steps
+
+* Send the request to your backend directly (without going through Azure Front Door) and see how long your backend usually takes to respond. One way to time a direct request is shown in the sketch after this list.
+* Send the request via Azure Front Door and see if you're getting any 503 responses. If not, the problem might not be a timeout issue. Contact support.
+* If going through Azure Front Door results in a 503 error response code, configure the `sendReceiveTimeout` field for Azure Front Door. You can extend the default timeout up to 4 minutes (240 seconds). The setting is under `Endpoint Setting` and is called `Origin response timeout`.
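+
+To measure how long your origin takes to respond when bypassing Azure Front Door, you can time a direct request with curl. This sketch assumes your origin is reachable directly, and the URL is a placeholder.
+
+```bash
+# Time a request sent directly to the origin (the URL is a placeholder).
+curl -s -o /dev/null -w "HTTP %{http_code}, total time: %{time_total}s\n" \
+  https://origin.contoso.com/health
+```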
+
+## Requests sent to the custom domain return a 400 status code
+
+### Symptom
+
+* You created an Azure Front Door instance, but a request to the domain or frontend host is returning an HTTP 400 status code.
+* You created a DNS mapping for a custom domain to the frontend host that you configured. However, sending a request to the custom domain host name returns an HTTP 400 status code. It doesn't appear to route to the backend that you configured.
+
+### Cause
+
+The problem occurs if you didn't configure a routing rule for the custom domain that was added as the frontend host. A routing rule needs to be explicitly added for that frontend host. That's true even if a routing rule has already been configured for the frontend host under the Azure Front Door subdomain (*.azurefd.net).
+
+### Troubleshooting steps
+
+Add a routing rule for the custom domain to direct traffic to the selected origin group.
+
+## Azure Front Door doesn't redirect HTTP to HTTPS
+
+### Symptom
+
+Azure Front Door has a routing rule that redirects HTTP to HTTPS, but accessing the domain still maintains HTTP as the protocol.
+
+### Cause
+
+This behavior can happen if you didn't configure the routing rules correctly for Azure Front Door. Your current configuration might not be specific enough and might have conflicting rules.
+
+### Troubleshooting steps
++
+## Request to the frontend host name returns a 411 status code
+
+### Symptom
+
+You created an Azure Front Door Standard/Premium instance and configured a frontend host, an origin group with at least one origin in it, and a routing rule that connects the frontend host to the origin group. Your content doesn't seem to be available when a request goes to the configured frontend host because an HTTP 411 status code gets returned.
+
+Responses to these requests might also contain an HTML error page in the response body that includes an explanatory statement. For example: `HTTP Error 411. The request must be chunked or have a content length`.
+
+### Cause
+
+There are several possible causes for this symptom. The overall reason is that your HTTP request isn't fully RFC-compliant.
+
+An example of noncompliance is a `POST` request sent without either a `Content-Length` or a `Transfer-Encoding` header (for example, using `curl -X POST https://example-front-door.domain.com`). This request doesn't meet the requirements set out in [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.3.2). Azure Front Door would block it with an HTTP 411 response.
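+
+For illustration, the same request becomes compliant once it carries a `Content-Length` (or `Transfer-Encoding`) header, for example by sending an empty body. The host name below is the same placeholder used above.
+
+```bash
+# Sending an empty body makes curl add a Content-Length header,
+# so the POST request is RFC-compliant and is no longer rejected with 411.
+curl -X POST -d '' https://example-front-door.domain.com
+```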
+
+This behavior is separate from the Web Application Firewall (WAF) functionality of Azure Front Door. Currently, there's no way to disable this behavior. All HTTP requests must meet the requirements, even if the WAF functionality isn't in use.
+
+### Troubleshooting steps
+
+- Verify that your requests are in compliance with the requirements set out in the necessary RFCs.
+
+- Take note of any HTML message body that's returned in response to your request. A message body often explains exactly *how* your request is noncompliant.
+
+## Next steps
+
+Learn how to [create a Front Door Standard/Premium](create-front-door-portal.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/deploy.md
Title: Deploy Azure Security Benchmark Foundation blueprint sample description: Deploy steps for the Azure Security Benchmark Foundation blueprint sample including blueprint artifact parameter details. Previously updated : 02/12/2020 Last updated : 02/17/2020 # Deploy the Azure Security Benchmark Foundation blueprint sample
to make each deployment of the copy of the blueprint sample unique.
- **Network Watcher name**: Name for the Network Watcher resource - **Network Watcher resource group name**: Name for the Network Watcher resource group - **Enable DDoS protection**: Enter 'true' or 'false' to specify whether or not DDoS Protection is enabled in the virtual network
+
+ > [!NOTE]
+ > If Network Watcher is already enabled, it's recommended that you use the existing
+ > Network Watcher resource group. You must also provide the location for the existing Network
+ > Watcher resource group for the artifact parameter **Network Watcher resource group location**.
- Artifact parameters
The following table provides a list of the blueprint parameters:
|Azure Virtual Network spoke template|Resource Manager template|Subnet address names (optional)|Array of subnet names to deploy to the spoke virtual network; for example, "subnet1","subnet2"| |Azure Virtual Network spoke template|Resource Manager template|Subnet address prefixes (optional)|Array of IP address prefixes for optional subnets for the spoke virtual network; for example, "10.0.7.0/24","10.0.8.0/24"| |Azure Virtual Network spoke template|Resource Manager template|Deploy spoke|Enter 'true' or 'false' to specify whether the assignment deploys the spoke components of the architecture|
-|Azure Network Watcher template|Resource Manager template|Network Watcher location|If Network Watcher is already enabled, this parameter value **must** match the location of the existing Network Watcher resource group.|
+|Azure Network Watcher template|Resource Manager template|Network Watcher location|Location for the Network Watcher resource|
|Azure Network Watcher template|Resource Manager template|Network Watcher resource group location|If Network Watcher is already enabled, this parameter value **must** match the name of the existing Network Watcher resource group.|
+## Troubleshooting
+
+If you encounter the error `The resource group 'NetworkWatcherRG' failed to deploy due to the
+following error: Invalid resource group location '{location}'. The Resource group already exists in
+location '{location}'.`, check that the blueprint parameter **Network Watcher resource group name**
+specifies the existing Network Watcher resource group name and that the artifact parameter **Network
+Watcher resource group location** specifies the existing Network Watcher resource group location.
+ ## Next steps Now that you've reviewed the steps to deploy the Azure Security Benchmark Foundation blueprint sample, visit the
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/index.md
Title: Azure Security Benchmark Foundation blueprint sample overview description: Overview and architecture of the Azure Security Benchmark Foundation blueprint sample. Previously updated : 02/12/2020 Last updated : 02/17/2020 # Overview of the Azure Security Benchmark Foundation blueprint sample
guides Azure Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/guides/developer/azure-developer-guide.md
When you allow access to Azure resources, it's always a best practice to provide
> **When to use**: When you need fine-grained access management for users and groups or when you need to make a user an owner of a subscription. >
- > **Get started**: To learn more, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+ > **Get started**: To learn more, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
- **Service principal objects**: Along with providing access to user principals and groups, you can grant the same access to a service principal.
guides Azure Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/guides/operations/azure-operations-guide.md
If you exceed the credit amount, your service is disabled until the next month s
Azure RBAC has several built-in roles that you can use to assign permissions. To make a user an administrator of an Azure subscription, assign them the [Owner](../../role-based-access-control/built-in-roles.md#owner) role at the subscription scope. The Owner role gives the user full access to all resources in the subscription, including the right to delegate access to others.
-For more information, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
### View billing information in the Azure portal
Here are a few examples of [built-in roles in Azure](../../role-based-access-con
- **Storage Account Contributor**: A user with this role can manage storage accounts but cannot manage access to the storage accounts.
-For more information, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Azure Virtual Machines
hdinsight Hdinsight Administer Use Portal Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-administer-use-portal-linux.md
Select your cluster name from the [**HDInsight clusters**](#showClusters) page.
||| |Overview|Provides general information for your cluster.| |Activity log|Show and query activity logs.|
- |Access control (IAM)|Use role assignments. See [Use role assignments to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md).|
+ |Access control (IAM)|Use role assignments. See [Assign Azure roles to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md).|
|Tags|Allows you to set key/value pairs to define a custom taxonomy of your cloud services. For example, you may create a key named **project**, and then use a common value for all services associated with a specific project.| |Diagnose and solve problems|Display troubleshooting information.| |Quickstart|Displays information that helps you get started using HDInsight.|
hdinsight Hdinsight Create Non Interactive Authentication Dotnet Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-create-non-interactive-authentication-dotnet-applications.md
An HDInsight cluster. See the [getting started tutorial](hadoop/apache-hadoop-li
## Assign a role to the Azure AD application
-Assign your Azure AD application a [role](../role-based-access-control/built-in-roles.md), to grant it permissions to perform actions. You can set the scope at the level of the subscription, resource group, or resource. The permissions are inherited to lower levels of scope. For example, adding an application to the Reader role for a resource group means that the application can read the resource group and any resources in it. In this article, you set the scope at the resource group level. For more information, see [Use role assignments to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md).
+Assign your Azure AD application a [role](../role-based-access-control/built-in-roles.md), to grant it permissions to perform actions. You can set the scope at the level of the subscription, resource group, or resource. The permissions are inherited to lower levels of scope. For example, adding an application to the Reader role for a resource group means that the application can read the resource group and any resources in it. In this article, you set the scope at the resource group level. For more information, see [Assign Azure roles to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md).
**To add the Owner role to the Azure AD application**
hdinsight Hdinsight Hadoop Customize Cluster Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md
Someone with at least Contributor access to the Azure subscription must have pre
Get more information on working with access management: - [Get started with access management in the Azure portal](../role-based-access-control/overview.md)-- [Use role assignments to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md)
+- [Assign Azure roles to manage access to your Azure subscription resources](../role-based-access-control/role-assignments-portal.md)
## Methods for using script actions
hpc-cache Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/security-baseline.md
Use built-in roles to allocate permission and only create custom role when requi
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
iot-central Howto Administer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-administer.md
If your administrator creates a custom theme for your application, this page inc
Use the **Delete** button to permanently delete your IoT Central application. This action permanently deletes all data that's associated with the application. > [!Note]
-> To delete an application, you must also have permissions to delete resources in the Azure subscription you chose when you created the application. To learn more, see [Use role-based access control to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+> To delete an application, you must also have permissions to delete resources in the Azure subscription you chose when you created the application. To learn more, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
## Manage programmatically
iot-hub About Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/about-iot-hub.md
IoT Hub and the device SDKs support the following protocols for connecting devic
* MQTT * MQTT over WebSockets
+IoT Hub and the device SDKs support the [Azure IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md) conventions for connecting devices. IoT Plug and Play devices use a device model to advertise their capabilities to IoT Plug and Play-enabled applications. The device model enables solution builders to integrate smart devices with their solutions without any manual configuration.
+ If your solution cannot use the device libraries, devices can use the MQTT v3.1.1, HTTPS 1.1, or AMQP 1.0 protocols to connect natively to your hub. If your solution cannot use one of the supported protocols, you can extend IoT Hub to support custom protocols:
iot-hub Iot Hub Devguide C2d Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-c2d-guidance.md
IoT Hub provides three options for device apps to expose functionality to a back
* [Cloud-to-device messages](iot-hub-devguide-messages-c2d.md) for one-way notifications to the device app.
+To learn how [Azure IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md) uses these options to control IoT Plug and Play devices, see [IoT Plug and Play service developer guide](../iot-pnp/concepts-developer-guide-service.md).
+ [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)] Here is a detailed comparison of the various cloud-to-device communication options.
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-device-twins.md
Refer to [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance.
Refer to [Cloud-to-device communication guidance](iot-hub-devguide-c2d-guidance.md) for guidance on using desired properties, direct methods, or cloud-to-device messages.
+To learn how device twins relate to the device model used by an Azure IoT Plug and Play device, see [Understand IoT Plug and Play digital twins](../iot-pnp/concepts-digital-twin.md).
+ ## Device twins Device twins store device-related information that:
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-d2c.md
Message routing enables you to send messages from your devices to cloud services in an automated, scalable, and reliable manner. Message routing can be used for:
-* **Sending device telemetry messages as well as events** namely, device lifecycle events, and device twin change events to the built-in-endpoint and custom endpoints. Learn about [routing endpoints](#routing-endpoints).
+* **Sending device telemetry messages as well as events** namely, device lifecycle events, device twin change events, and digital twin change events to the built-in-endpoint and custom endpoints. Learn about [routing endpoints](#routing-endpoints). To learn more about the events sent from IoT Plug and Play devices, see [Understand IoT Plug and Play digital twins](../iot-pnp/concepts-digital-twin.md).
* **Filtering data before routing it to various endpoints** by applying rich queries. Message routing allows you to query on the message properties and message body as well as device twin tags and device twin properties. Learn more about using [queries in message routing](iot-hub-devguide-routing-query-syntax.md).
You can enable/disable the fallback route in the Azure portal->Message Routing b
## Non-telemetry events
-In addition to device telemetry, message routing also enables sending device twin change events, device lifecycle events, and digital twin change events. For example, if a route is created with data source set to **device twin change events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with data source set to **device lifecycle events**, IoT Hub sends a message indicating whether the device was deleted or created. Finally, as part of the [IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md), a developer can create routes with data source set to **digital twin change events** and IoT Hub sends messages whenever a digital twin [property](../iot-pnp/iot-plug-and-play-glossary.md) is set or changed, a [digital twin](../iot-pnp/iot-plug-and-play-glossary.md) is replaced, or when a change event happens for the underlying device twin.
+In addition to device telemetry, message routing also enables sending device twin change events, device lifecycle events, and digital twin change events. For example, if a route is created with data source set to **device twin change events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with data source set to **device lifecycle events**, IoT Hub sends a message indicating whether the device was deleted or created. Finally, as part of [Azure IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md), a developer can create routes with data source set to **digital twin change events** and IoT Hub sends messages whenever a digital twin [property](../iot-pnp/iot-plug-and-play-glossary.md) is set or changed, a [digital twin](../iot-pnp/iot-plug-and-play-glossary.md) is replaced, or when a change event happens for the underlying device twin.
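For orientation, a hedged Azure CLI sketch of creating such a route for device twin change events (the hub and route names are placeholders):

```azurecli
# Route device twin change events to the built-in endpoint; adjust names and conditions as needed.
az iot hub route create \
    --hub-name {your-iot-hub-name} \
    --route-name twin-change-route \
    --source-type twinchangeevents \
    --endpoint-name events \
    --enabled true
```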
[IoT Hub also integrates with Azure Event Grid](iot-hub-event-grid.md) to publish device events to support real-time integrations and automation of workflows based on these events. See key [differences between message routing and Event Grid](iot-hub-event-grid-routing-comparison.md) to learn which works best for your scenario.
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-sdks.md
There are two categories of software development kits (SDKs) for working with IoT Hub:
-* **IoT Hub Device SDKs** enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
+* **IoT Hub Device SDKs** enable you to build apps that run on your IoT devices using device client or module client. These apps send telemetry to your IoT hub, and optionally receive messages, job, method, or twin updates from your IoT hub. You can use these SDKs to build device apps that use [Azure IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md) conventions and models to advertise their capabilities to IoT Plug and Play-enabled applications. You can also use module client to author [modules](../iot-edge/iot-edge-modules.md) for [Azure IoT Edge runtime](../iot-edge/about-iot-edge.md).
* **IoT Hub Service SDKs** enable you to build backend applications to manage your IoT hub, and optionally send messages, schedule jobs, invoke direct methods, or send desired property updates to your IoT devices or modules.
Azure IoT Hub device SDK for .NET:
* Download from [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client/). The namespace is Microsoft.Azure.Devices.Client, which contains the IoT Hub device clients (DeviceClient, ModuleClient). * [Source code](https://github.com/Azure/azure-iot-sdk-csharp)
-* [API reference](/dotnet/api/microsoft.azure.devices?view=azure-dotnet)
-* [Module reference](/dotnet/api/microsoft.azure.devices.client.moduleclient?view=azure-dotnet)
+* [API reference](/dotnet/api/microsoft.azure.devices?view=azure-dotnet&preserve-view=true)
+* [Module reference](/dotnet/api/microsoft.azure.devices.client.moduleclient?view=azure-dotnet&preserve-view=true)
Azure IoT Hub device SDK for Embedded C (ANSI C - C99):
Azure IoT Hub device SDK for Java:
* Add to [Maven](https://github.com/Azure/azure-iot-sdk-jav#for-the-device-sdk) project * [Source code](https://github.com/Azure/azure-iot-sdk-java) * [API reference](/java/api/com.microsoft.azure.sdk.iot.device)
-* [Module reference](/java/api/com.microsoft.azure.sdk.iot.device.moduleclient?view=azure-java-stable)
+* [Module reference](/java/api/com.microsoft.azure.sdk.iot.device.moduleclient?view=azure-java-stable&preserve-view=true)
Azure IoT Hub device SDK for Node.js: * Install from [npm](https://www.npmjs.com/package/azure-iot-device) * [Source code](https://github.com/Azure/azure-iot-sdk-node)
-* [API reference](/javascript/api/azure-iot-device/?view=azure-iot-typescript-latest)
-* [Module reference](/javascript/api/azure-iot-device/moduleclient?view=azure-node-latest)
+* [API reference](/javascript/api/azure-iot-device/?view=azure-iot-typescript-latest&preserve-view=true)
+* [Module reference](/javascript/api/azure-iot-device/moduleclient?view=azure-node-latest&preserve-view=true)
Azure IoT Hub device SDK for Python:
Azure IoT Hub service SDK for Node.js:
* Download from [npm](https://www.npmjs.com/package/azure-iothub) * [Source code](https://github.com/Azure/azure-iot-sdk-node)
-* [API reference](/javascript/api/azure-iothub/?view=azure-iot-typescript-latest)
+* [API reference](/javascript/api/azure-iothub/?view=azure-iot-typescript-latest&preserve-view=true)
Azure IoT Hub service SDK for Python:
Azure Provisioning device and service SDKs for C#:
* Download from [Device SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) and [Service SDK](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) from NuGet. * [Source code](https://github.com/Azure/azure-iot-sdk-csharp/)
-* [API reference](/dotnet/api/microsoft.azure.devices.provisioning.client?view=azure-dotnet)
+* [API reference](/dotnet/api/microsoft.azure.devices.provisioning.client?view=azure-dotnet&preserve-view=true)
Azure Provisioning device and service SDKs for C:
Azure Provisioning device and service SDKs for Java:
* Add to [Maven](https://github.com/Azure/azure-iot-sdk-jav#for-the-service-sdk) project * [Source code](https://github.com/Azure/azure-iot-sdk-java/blob/master/provisioning)
-* [API reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device?view=azure-java-stable)
+* [API reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device?view=azure-java-stable&preserve-view=true)
Azure Provisioning device and service SDKs for Node.js: * [Source code](https://github.com/Azure/azure-iot-sdk-node/tree/master/provisioning)
-* [API reference](/javascript/api/overview/azure/iothubdeviceprovisioning?view=azure-node-latest)
+* [API reference](/javascript/api/overview/azure/iothubdeviceprovisioning?view=azure-node-latest&preserve-view=true)
* Download [Device SDK](https://badge.fury.io/js/azure-iot-provisioning-device) and [Service SDK](https://badge.fury.io/js/azure-iot-provisioning-service) from npm Azure Provisioning device and service SDKs for Python:
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-mqtt-support.md
The following table contains links to code samples for each supported language a
| Language | MQTT protocol parameter | MQTT over Web Sockets protocol parameter | | | | | [Node.js](https://github.com/Azure/azure-iot-sdk-node/blob/master/device/samples/simple_sample_device.js) | azure-iot-device-mqtt.Mqtt | azure-iot-device-mqtt.MqttWs |
-| [Java](https://github.com/Azure/azure-iot-sdk-java/blob/master/device/iot-device-samples/send-receive-sample/src/main/java/samples/com/microsoft/azure/sdk/iot/SendReceive.java) |[IotHubClientProtocol](/java/api/com.microsoft.azure.sdk.iot.device.iothubclientprotocol?view=azure-java-stable).MQTT | IotHubClientProtocol.MQTT_WS |
+| [Java](https://github.com/Azure/azure-iot-sdk-java/blob/master/device/iot-device-samples/send-receive-sample/src/main/java/samples/com/microsoft/azure/sdk/iot/SendReceive.java) |[IotHubClientProtocol](/java/api/com.microsoft.azure.sdk.iot.device.iothubclientprotocol?view=azure-java-stable&preserve-view=true).MQTT | IotHubClientProtocol.MQTT_WS |
| [C](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iothub_client_sample_mqtt_dm) | [MQTT_Protocol](/azure/iot-hub/iot-c-sdk-ref/iothubtransportmqtt-h/mqtt-protocol) | [MQTT_WebSocket_Protocol](/azure/iot-hub/iot-c-sdk-ref/iothubtransportmqtt-websockets-h/mqtt-websocket-protocol) |
-| [C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/iothub/device/samples) | [TransportType](/dotnet/api/microsoft.azure.devices.client.transporttype?view=azure-dotnet).Mqtt | TransportType.Mqtt falls back to MQTT over Web Sockets if MQTT fails. To specify MQTT over Web Sockets only, use TransportType.Mqtt_WebSocket_Only |
+| [C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/master/iothub/device/samples) | [TransportType](/dotnet/api/microsoft.azure.devices.client.transporttype?view=azure-dotnet&preserve-view=true).Mqtt | TransportType.Mqtt falls back to MQTT over Web Sockets if MQTT fails. To specify MQTT over Web Sockets only, use TransportType.Mqtt_WebSocket_Only |
| [Python](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples) | Supports MQTT by default | Add `websockets=True` in the call to create the client | The following fragment shows how to specify the MQTT over Web Sockets protocol when using the Azure IoT Node.js SDK:
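A minimal sketch of such a fragment (the connection string environment variable is a placeholder, not taken from the linked sample):

```ts
import { Client } from "azure-iot-device";
import { MqttWs } from "azure-iot-device-mqtt"; // use Mqtt instead for plain MQTT over TCP

// Passing MqttWs as the transport makes the SDK use MQTT over WebSockets (port 443).
const connectionString = process.env.IOTHUB_DEVICE_CONNECTION_STRING ?? "";
const client = Client.fromConnectionString(connectionString, MqttWs);

client.open((err) => {
  if (err) {
    console.error("Could not connect:", err.message);
  } else {
    console.log("Connected to IoT Hub over MQTT/WebSockets");
  }
});
```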
In the [IoT MQTT Sample repository](https://github.com/Azure-Samples/IoTMQTTSamp
These samples use the Eclipse Mosquitto library to send messages to the MQTT Broker implemented in the IoT hub.
+To learn how to adapt the samples to use the [Azure IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md) conventions, see [Tutorial - Use MQTT to develop an IoT Plug and Play device client](../iot-pnp/tutorial-use-mqtt.md).
+ This repository contains: **For Windows:**
If a device cannot use the device SDKs, it can still connect to the public devic
For more information about how to generate SAS tokens, see the device section of [Using IoT Hub security tokens](iot-hub-devguide-security.md#use-sas-tokens-in-a-device-app).
- When testing, you can also use the cross-platform [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) or the CLI extension command [az iot hub generate-sas-token](/cli/azure/ext/azure-iot/iot/hub?view=azure-cli-latest#ext-azure-iot-az-iot-hub-generate-sas-token) to quickly generate a SAS token that you can copy and paste into your own code.
+ When testing, you can also use the cross-platform [Azure IoT Tools for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) or the CLI extension command [az iot hub generate-sas-token](/cli/azure/ext/azure-iot/iot/hub?view=azure-cli-latest&preserve-view=true#ext-azure-iot-az-iot-hub-generate-sas-token) to quickly generate a SAS token that you can copy and paste into your own code.
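For example, a hedged sketch of the CLI call (the hub and device names are placeholders; the duration is in seconds):

```azurecli
az iot hub generate-sas-token \
    --hub-name {your-iot-hub-name} \
    --device-id {your-device-id} \
    --duration 3600
```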
### For Azure IoT Tools
key-vault Move Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/move-subscription.md
For assigning policies, see:
- [Assign an access policy using PowerShell](assign-access-policy-powershell.md) For adding role assignments, see:-- [Add role assignment using Portal](../../role-based-access-control/role-assignments-portal.md)-- [Add role assignment using Azure CLI](../../role-based-access-control/role-assignments-cli.md)-- [Add role assignment using PowerShell](../../role-based-access-control/role-assignments-powershell.md)
+- [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
+- [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md)
+- [Assign Azure roles using PowerShell](../../role-based-access-control/role-assignments-powershell.md)
### Update managed identities
key-vault About Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/about-keys.md
tags: azure-resource-manager
Previously updated : 09/15/2020 Last updated : 02/17/2021 # About keys
-Azure Key Vault provides two types of resources to store and manage cryptographic keys:
+Azure Key Vault provides two types of resources to store and manage cryptographic keys. Vaults support software-protected and HSM-protected (Hardware Security Module) keys. Managed HSMs only support HSM-protected keys.
|Resource type|Key protection methods|Data-plane endpoint base URL| |--|--|--| | **Vaults** | Software-protected<br/><br/>and<br/><br/>HSM-protected (with Premium SKU)</li></ul> | https://{vault-name}.vault.azure.net |
-| **Managed HSM pools** | HSM-protected | https://{hsm-name}.managedhsm.azure.net |
+| **Managed HSMs** | HSM-protected | https://{hsm-name}.managedhsm.azure.net |
|||| - **Vaults** - Vaults provide a low-cost, easy to deploy, multi-tenant, zone-resilient (where available), highly available key management solution suitable for most common cloud application scenarios.
The base JWK/JWA specifications are also extended to enable key types unique to
HSM-protected keys (also referred to as HSM-keys) are processed in an HSM (Hardware Security Module) and always remain within the HSM protection boundary. - Vaults use **FIPS 140-2 Level 2** validated HSMs to protect HSM-keys in shared HSM backend infrastructure. -- Managed HSM pools uses **FIPS 140-2 Level 3** validated HSM modules to protect your keys. Each HSM pool is an isolated single-tenant instance with it's own [security domain](../managed-hsm/security-domain.md) providing complete cryptographic isolation from all other HSM pools sharing the same hardware infrastructure.
+- Managed HSM uses **FIPS 140-2 Level 3** validated HSM modules to protect your keys. Each HSM pool is an isolated single-tenant instance with its own [security domain](../managed-hsm/security-domain.md) providing complete cryptographic isolation from all other HSMs sharing the same hardware infrastructure.
These keys are protected in single-tenant HSM pools. You can import RSA, EC, and symmetric keys, in soft form or by exporting from a supported HSM device. You can also generate keys in HSM pools. Importing HSM keys using the method described in the [BYOK (bring your own key) specification](../keys/byok-specification.md) enables secure transportation of the key material into Managed HSM pools.
For more information on geographical boundaries, see [Microsoft Azure Trust Cent
## Key types and protection methods
-Key Vault supports RSA, EC and symmetric keys.
+Key Vault supports RSA and EC keys. Managed HSM supports RSA, EC, and symmetric keys.
### HSM-protected keys
-|Key type|Vaults (Premium SKU only)|Managed HSM pools|
-|--|--|--|--|
-**EC-HSM**: Elliptic Curve key|FIPS 140-2 Level 2 HSM|FIPS 140-2 Level 3 HSM
-**RSA-HSM**: RSA key|FIPS 140-2 Level 2 HSM|FIPS 140-2 Level 3 HSM
-**oct-HSM**: Symmetric|Not supported|FIPS 140-2 Level 3 HSM
-||||
+|Key type|Vaults (Premium SKU only)|Managed HSMs|
+|--|--|--|
+|**EC-HSM**: Elliptic Curve key | Supported | Supported|
+|**RSA-HSM**: RSA key|Supported|Supported|
+|**oct-HSM**: Symmetric key|Not supported|Supported|
+|||
### Software-protected keys
-|Key type|Vaults|Managed HSM pools|
-|--|--|--|--|
-**RSA**: "Software-protected" RSA key|FIPS 140-2 Level 1|Not supported
-**EC**: "Software-protected" Elliptic Curve key|FIPS 140-2 Level 1|Not supported
-||||
+|Key type|Vaults|Managed HSMs|
+|--|--|--|
+**RSA**: "Software-protected" RSA key|Supported|Not supported
+**EC**: "Software-protected" Elliptic Curve key|Supported|Not supported
+|||
+
+### Compliance
+
+|Key type and destination|Compliance|
+|||
+|Software-protected keys in vaults (Premium & Standard SKUs) | FIPS 140-2 Level 1|
+|HSM-protected keys in vaults (Premium SKU)| FIPS 140-2 Level 2|
+|HSM-protected keys in Managed HSM|FIPS 140-2 Level 3|
+|||
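To make these protection options concrete, a hedged Azure CLI sketch of creating a key with each method (the vault and Managed HSM names in braces are placeholders):

```azurecli
# Software-protected RSA key in a vault (Standard or Premium SKU).
az keyvault key create --vault-name {vault-name} --name example-software-key --kty RSA

# HSM-protected RSA key in a Premium-SKU vault.
az keyvault key create --vault-name {premium-vault-name} --name example-hsm-key --kty RSA-HSM

# Symmetric (oct-HSM) key in a Managed HSM.
az keyvault key create --hsm-name {managed-hsm-name} --name example-oct-key --kty oct-HSM
```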
++
-Please see [Key types, algorithms, and operations](about-keys-details.md) for details about each key type, algorithms, operations, attributes and tags.
+See [Key types, algorithms, and operations](about-keys-details.md) for details about each key type, algorithms, operations, attributes, and tags.
## Next steps - [About Key Vault](../general/overview.md)
Please see [Key types, algorithms, and operations](about-keys-details.md) for de
- [About certificates](../certificates/about-certificates.md) - [Key Vault REST API overview](../general/about-keys-secrets-certificates.md) - [Authentication, requests, and responses](../general/authentication-requests-and-responses.md)-- [Key Vault Developer's Guide](../general/developers-guide.md)
+- [Key Vault Developer's Guide](../general/developers-guide.md)
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-template.md
To complete this article: - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- User would need to have RBAC bult-in role assigned eg. contributor. [Learn more here](../../role-based-access-control/role-assignments-portal.md)
+- You must have an Azure built-in role assigned, for example Contributor. [Learn more here](../../role-based-access-control/role-assignments-portal.md)
- Your Azure AD user object ID is needed by the template to configure permissions. The following procedure gets the object ID (GUID). 1. Run the following Azure PowerShell or Azure CLI command by select **Try it**, and then paste the script into the shell pane. To paste the script, right-click the shell, and then select **Paste**.
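For reference, one hedged way to get the object ID with the Azure CLI (at the time of writing the property is `objectId`; newer CLI versions expose it as `id`):

```azurecli
az ad signed-in-user show --query objectId --output tsv
```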
lighthouse Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/managed-services-offers.md
Title: Managed Service offers in Azure Marketplace description: Managed Service offers let you sell resource management offers to customers in Azure Marketplace. Previously updated : 02/10/2021 Last updated : 02/17/2021
Managed Service offers streamline the process of onboarding customers to Azure L
After that, users in your organization will be able to work on those resources from within your managing tenant through [Azure delegated resource management](azure-delegated-resource-management.md), according to the access you defined when creating the offer. This is done through a manifest that specifies the Azure Active Directory (Azure AD) users, groups, and service principals that will have access to customer resources, along with [roles](tenants-users-roles.md) that define their level of access.
+> [!NOTE]
+> Managed Service offers may not be available in Azure Government and other national clouds.
+ ## Public and private offers
-Each Managed Services offer includes one or more plans. Plans can be either private or public.
+Each Managed Service offer includes one or more plans. Plans can be either private or public.
If you want to limit your offer to specific customers, you can publish a private plan. When you do so, the plan can only be purchased for the specific subscription IDs that you provide. For more info, see [Private offers](../../marketplace/private-offers.md).
If appropriate, you can include both public and private plans in the same offer.
## Publish Managed Service offers
-To learn how to publish a Managed Services offer, see [Publish a Managed Services offer to Azure Marketplace](../how-to/publish-managed-services-offers.md).
+To learn how to publish a Managed Service offer, see [Publish a Managed Service offer to Azure Marketplace](../how-to/publish-managed-services-offers.md).
## Next steps - Learn about [Azure delegated resource management](azure-delegated-resource-management.md) and [cross-tenant management experiences](cross-tenant-management-experience.md).-- [Publish Managed Services offers](../how-to/publish-managed-services-offers.md) to Azure Marketplace.
+- [Publish Managed Service offers](../how-to/publish-managed-services-offers.md) to Azure Marketplace.
lighthouse Publish Managed Services Offers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/publish-managed-services-offers.md
Title: Publish a Managed Service offer to Azure Marketplace description: Learn how to publish a Managed Service offer that onboards customers to Azure Lighthouse. Previously updated : 02/16/2021 Last updated : 02/17/2021
The following table can help determine whether to onboard customers by publishin
|Can use automation to onboard multiple subscriptions, resource groups, or customers |No |Yes | |Immediate access to new built-in roles and Azure Lighthouse features |Not always (generally available after some delay) |Yes |
+> [!NOTE]
+> Managed Service offers may not be available in Azure Government and other national clouds.
+ ## Create your offer For detailed instructions about how to create your offer, including all of the information and assets you'll need to provide, see [Create a Managed Service offer](../../marketplace/create-managed-service-offer.md).
machine-learning Manage New Webservice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-new-webservice.md
If the user does not have the correct permissions to access resources in the Azu
For more information on creating a workspace, see [Create and share an Azure Machine Learning Studio (classic) workspace](create-workspace.md).
-For more information on setting access permissions, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+For more information on setting access permissions, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Manage New Web services
machine-learning Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-workspace.md
To manage the web services associated with this Studio (classic) workspace, use
> [!NOTE] > To deploy or manage New web services you must be assigned a contributor or administrator role on the subscription to which the web service is deployed. If you invite another user to a machine learning Studio (classic) workspace, you must assign them to a contributor or administrator role on the subscription before they can deploy or manage web services. >
->For more information on setting access permissions, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+>For more information on setting access permissions, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Next steps * Learn more about [deploy Machine Learning with Azure Resource Manager Templates](deploy-with-resource-manager-template.md).
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-understand-automated-ml.md
explained_variance|Explained variance measures the extent to which a model accou
mean_absolute_error|Mean absolute error is the expected value of absolute value of difference between the target and the prediction.<br><br> **Objective:** Closer to 0 the better <br> **Range:** [0, inf) <br><br> Types: <br>`mean_absolute_error` <br> `normalized_mean_absolute_error`, the mean_absolute_error divided by the range of the data. | [Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html)| mean_absolute_percentage_error|Mean absolute percentage error (MAPE) is a measure of the average difference between a predicted value and the actual value.<br><br> **Objective:** Closer to 0 the better <br> **Range:** [0, inf) || median_absolute_error|Median absolute error is the median of all absolute differences between the target and the prediction. This loss is robust to outliers.<br><br> **Objective:** Closer to 0 the better <br> **Range:** [0, inf)<br><br>Types: <br> `median_absolute_error`<br> `normalized_median_absolute_error`: the median_absolute_error divided by the range of the data. |[Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.median_absolute_error.html)|
-r2_score|R^2 is the coefficient of determination or the percent reduction in squared errors compared to a baseline model that outputs the mean. <br> <br> **Objective:** Closer to 1 the better <br> **Range:** [-1, 1] <br><br> Note: R^2 often has the range (-inf, 1], but Automated ML clips negative values for very bad models to -1.|[Calculation](https://scikit-learn.org/0.16/modules/generated/sklearn.metrics.r2_score.html)|
+r2_score|R<sup>2</sup> (the coefficient of determination) measures the proportional reduction in mean squared error (MSE) relative to the total variance of the observed data. <br> <br> **Objective:** Closer to 1 the better <br> **Range:** [-1, 1]<br><br>Note: R<sup>2</sup> often has the range (-inf, 1]. The MSE can be larger than the observed variance, so R<sup>2</sup> can have arbitrarily large negative values, depending on the data and the model predictions. Automated ML clips reported R<sup>2</sup> scores at -1, so a value of -1 for R<sup>2</sup> likely means that the true R<sup>2</sup> score is less than -1. Consider the other metrics values and the properties of the data when interpreting a negative R<sup>2</sup> score.|[Calculation](https://scikit-learn.org/0.16/modules/generated/sklearn.metrics.r2_score.html)|
root_mean_squared_error |Root mean squared error (RMSE) is the square root of the expected squared difference between the target and the prediction. For an unbiased estimator, RMSE is equal to the standard deviation.<br> <br> **Objective:** Closer to 0 the better <br> **Range:** [0, inf)<br><br>Types:<br> `root_mean_squared_error` <br> `normalized_root_mean_squared_error`: the root_mean_squared_error divided by the range of the data. |[Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html)| root_mean_squared_log_error|Root mean squared log error is the square root of the expected squared logarithmic error.<br><br>**Objective:** Closer to 0 the better <br> **Range:** [0, inf) <br> <br>Types: <br>`root_mean_squared_log_error` <br> `normalized_root_mean_squared_log_error`: the root_mean_squared_log_error divided by the range of the data. |[Calculation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_log_error.html)| spearman_correlation| Spearman correlation is a nonparametric measure of the monotonicity of the relationship between two datasets. Unlike the Pearson correlation, the Spearman correlation does not assume that both datasets are normally distributed. Like other correlation coefficients, Spearman varies between -1 and 1 with 0 implying no correlation. Correlations of -1 or 1 imply an exact monotonic relationship. <br><br> Spearman is a rank-order correlation metric meaning that changes to predicted or actual values will not change the Spearman result if they do not change the rank order of predicted or actual values.<br> <br> **Objective:** Closer to 1 the better <br> **Range:** [-1, 1]|[Calculation](https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.stats.spearmanr.html)|
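In symbols, the standard definition underlying the R<sup>2</sup> description above can be written as:

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} = 1 - \frac{\mathrm{MSE}}{\mathrm{Var}(y)}$$

where $y_i$ are the observed values, $\hat{y}_i$ the predictions, and $\bar{y}$ the mean of the observed values; when the model's MSE exceeds the variance of the data, R<sup>2</sup> becomes negative.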
marketplace Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/anomaly-detection.md
Title: Anomaly detection for metered billing | Azure Marketplace
-description: Learn how automatic anomaly detection for metered billing helps insure your customers are billed correctly for metered usage of your commercial marketplace offer.
+ Title: Manage metered billing anomalies in Partner Center | Azure Marketplace
+description: Learn how automatic anomaly detection for metered billing helps ensure your customers are billed correctly for metered usage of your commercial marketplace offers.
Previously updated : 2/17/2021 Last updated : 2/18/2021
-# Anomaly detection for metered billing
+# Manage metered billing anomalies in Partner Center
The custom metered billing option is currently available to [Software as a service](plan-saas-offer.md) (SaaS) offers and [Azure Applications](plan-azure-application-offer.md#types-of-plans) with a Managed application plan.
After you mark an overage usage as an anomaly or acknowledge a model that flagge
## See also - [Metered billing for SaaS using the commercial marketplace metering service](./partner-center-portal/saas-metered-billing.md) - [Managed application metered billing](./partner-center-portal/azure-app-metered-billing.md)
+- [Anomaly detection service for metered billing](./partner-center-portal/anomaly-detection-service-for-metered-billing.md)
marketplace Anomaly Detection Service For Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/anomaly-detection-service-for-metered-billing.md
The model works by generating retrospective confidence intervals. The time serie
## Anomaly detection notification
-You can evaluate, manage, and acknowledge anomalies in Partner Center. To learn how, see [Anomaly detection for metered billing](../anomaly-detection.md).
+You can evaluate, manage, and acknowledge anomalies in Partner Center. To learn how, see [Manage metered billing anomalies in Partner Center](../anomaly-detection.md).
To ensure that your customers are not overcharged for metered usage, you should investigate if detected anomalies are real issues. If so, you can acknowledge the incorrect usage in Partner Center.
For more publisher support options, see [Support for the commercial marketplace
## Next steps - Learn about the [Marketplace metering service API](marketplace-metering-service-apis.md).-- [Anomaly detection for metered billing](../anomaly-detection.md)
+- [Manage metered billing anomalies in Partner Center](../anomaly-detection.md)
media-services Cli Create Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/cli-create-jobs.md
In Media Services v3, when you submit Jobs to process your videos, you have to t
[Create a Media Services account](./create-account-howto.md). - ## Example script When you run `az ams job start`, you can set a label on the job's output. The label can later be used to identify what this output asset is for.
media-services Cli Publish Asset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/cli-publish-asset.md
The Azure CLI script in this article shows how to create a Streaming Locator and
[Create a Media Services account](./create-account-howto.md). - ## Example script [!code-azurecli-interactive[main](../../../cli_scripts/media-services/publish-asset/Publish-Asset.sh "Publish an asset")]
media-services Cli Reset Account Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/cli-reset-account-credentials.md
The Azure CLI script in this article shows how to reset your account credentials
[Create a Media Services account](./create-account-howto.md). - ## Example script ```azurecli-interactive
media-services Configure Connect Nodejs Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/configure-connect-nodejs-howto.md
na ms.devlang: na Previously updated : 08/31/2020 Last updated : 02/17/2021
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This article shows you how to connect to the Azure Media Services v3 node.js SDK using the service principal sign in method.
+This article shows you how to connect to the Azure Media Services v3 Node.js SDK using the service principal sign-in method.
## Prerequisites - Install [Node.js](https://nodejs.org/en/download/).
+- Install [TypeScript](https://www.typescriptlang.org/download).
- [Create a Media Services account](./create-account-howto.md). Be sure to remember the resource group name and the Media Services account name. > [!IMPORTANT]
-> Review [naming conventions](media-services-apis-overview.md#naming-conventions).
+> Review the Azure Media Services [naming conventions](media-services-apis-overview.md#naming-conventions) to understand the important naming restrictions on entities.
-## Create package.json
+## Reference documentation for @azure/arm-mediaservices
+- [Reference documentation for Azure Media Services modules for Node.js](https://docs.microsoft.com/javascript/api/overview/azure/media-services?view=azure-node-latest)
+
+## More developer documentation for Node.js on Azure
+- [Azure for JavaScript & Node.js developers](https://docs.microsoft.com/azure/developer/javascript/?view=azure-node-latest)
+- [Media Services source code in the @azure/azure-sdk-for-js GitHub repo](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/mediaservices/arm-mediaservices)
+- [Azure Package Documentation for Node.js developers](https://docs.microsoft.com/javascript/api/overview/azure/?view=azure-node-latest)
+
+## Install the packages
1. Create a package.json file using your favorite editor. 1. Open the file and paste the following code:
This article shows you how to connect to the Azure Media Services v3 node.js SDK
"name": "media-services-node-sample", "version": "0.1.0", "description": "",
- "main": "./index.js",
+ "main": "./index.ts",
"dependencies": {
- "azure-arm-mediaservices": "^8.0.0",
- "azure-storage": "^2.8.0",
- "ms-rest": "^2.3.3",
- "ms-rest-azure": "^2.5.5"
+ "@azure/arm-mediaservices": "^8.0.0",
+ "@azure/abort-controller": "^1.0.2",
+ "@azure/ms-rest-nodeauth": "^3.0.6",
+ "@azure/storage-blob": "^12.4.0",
} } ```
The following packages should be specified:
|Package|Description| |||
-|`azure-arm-mediaservices`|Azure Media Services SDK. <br/>To make sure you are using the latest Azure Media Services package, check [NPM install azure-arm-mediaservices](https://www.npmjs.com/package/azure-arm-mediaservices/).|
-|`azure-storage`|Storage SDK. Used when uploading files into assets.|
-|`ms-rest-azure`| Used to sign in.|
+|`@azure/arm-mediaservices`|Azure Media Services SDK. <br/>To make sure you are using the latest Azure Media Services package, check [npm install @azure/arm-mediaservices](https://www.npmjs.com/package/@azure/arm-mediaservices).|
+|`@azure/ms-rest-nodeauth` | Required for AAD authentication using Service Principal or Managed Identity|
+|`@azure/storage-blob`|Storage SDK. Used to upload and download files into Assets in Azure Media Services for encoding.|
+|`@azure/ms-rest-js`| Used to sign in.|
+|`@azure/abort-controller`| Used along with the storage client to time out long-running download operations.|
+ You can run the following command to make sure you are using the latest package:
+### Install @azure/arm-mediaservices
```
-npm install azure-arm-mediaservices
+npm install @azure/arm-mediaservices
```
-## Connect to Node.js client
+### Install @azure/ms-rest-nodeauth
-1. Create a .js file using your favorite editor.
+Install version 3.0.0 or later of `@azure/ms-rest-nodeauth`.
+
+```
+npm install @azure/ms-rest-nodeauth@"^3.0.0"
+```
+
+## Connect to Node.js client using TypeScript
+
+1. Create a TypeScript .ts file using your favorite editor.
1. Open the file and paste the following code.
-1. Set the values in the "endpoint config" section to values you got from [access APIs](./access-api-howto.md).
-
-```js
-'use strict';
-
-const MediaServices = require('azure-arm-mediaservices');
-const msRestAzure = require('ms-rest-azure');
-const msRest = require('ms-rest');
-const azureStorage = require('azure-storage');
-
-// endpoint config
-// make sure your URL values end with '/'
-const armAadAudience = "";
-const aadEndpoint = "";
-const armEndpoint = "";
-const subscriptionId = "";
-const accountName = "";
-const region = "";
-const aadClientId = "";
-const aadSecret = "";
-const aadTenantId = "";
-const resourceGroup = "";
-
-let azureMediaServicesClient;
-
-///////////////////////////////////////////
-// Entrypoint for sample script //
-///////////////////////////////////////////
-
-msRestAzure.loginWithServicePrincipalSecret(aadClientId, aadSecret, aadTenantId, {
- environment: {
- activeDirectoryResourceId: armAadAudience,
- resourceManagerEndpointUrl: armEndpoint,
- activeDirectoryEndpointUrl: aadEndpoint
+1. Create an .env file and fill out the details from the Azure portal. See [access APIs](./access-api-howto.md).
+
+### Sample .env file
+```
+# copy the content of this file to a file named ".env". It should be stored at the root of the repo.
+# The values can be obtained from the API Access page for your Media Services account in the portal.
+AZURE_CLIENT_ID=""
+AZURE_CLIENT_SECRET= ""
+AZURE_TENANT_ID= ""
+
+# Change this to match your AAD Tenant domain name.
+AAD_TENANT_DOMAIN = "microsoft.onmicrosoft.com"
+
+# Set this to your Media Services Account name, resource group it is contained in, and location
+AZURE_MEDIA_ACCOUNT_NAME = ""
+AZURE_LOCATION= ""
+AZURE_RESOURCE_GROUP= ""
+
+# Set this to your Azure Subscription ID
+AZURE_SUBSCRIPTION_ID= ""
+
+# You must change these if you are using Gov Cloud, China, or other non standard cloud regions
+AZURE_ARM_AUDIENCE= "https://management.core.windows.net"
+AZURE_ARM_ENDPOINT="https://management.azure.com"
+
+# DRM Testing
+DRM_SYMMETRIC_KEY="add random base 64 encoded string here"
+```
+
+## TypeScript - Hello World - List Assets
+This sample shows how to connect to the Media Services client with a Service Principal and list Assets in the account.
+If you are using a fresh account, the list will come back empty. You can upload a few assets in the portal to see the results.
+
+```ts
+import * as msRestNodeAuth from "@azure/ms-rest-nodeauth";
+import { AzureMediaServices } from '@azure/arm-mediaservices';
+import { AzureMediaServicesOptions } from "@azure/arm-mediaservices/esm/models";
+// Load the .env file if it exists
+import * as dotenv from "dotenv";
+dotenv.config();
+
+export async function main() {
+ // Copy the samples.env file and rename it to .env first, then populate its values with the values obtained
+ // from your Media Services account's API Access page in the Azure portal.
+ const clientId = process.env.AZURE_CLIENT_ID as string;
+ const secret = process.env.AZURE_CLIENT_SECRET as string;
+ const tenantDomain = process.env.AAD_TENANT_DOMAIN as string;
+ const subscriptionId = process.env.AZURE_SUBSCRIPTION_ID as string;
+ const resourceGroup = process.env.AZURE_RESOURCE_GROUP as string;
+ const accountName = process.env.AZURE_MEDIA_ACCOUNT_NAME as string;
+
+ let clientOptions: AzureMediaServicesOptions = {
+ longRunningOperationRetryTimeout: 5, // set the time out for retries to 5 seconds
+ noRetryPolicy: false // use the default retry policy.
+ }
+
+ const creds = await msRestNodeAuth.loginWithServicePrincipalSecret(clientId, secret, tenantDomain);
+ const mediaClient = new AzureMediaServices(creds, subscriptionId, clientOptions);
+
+ // List Assets in Account
+ console.log("Listing Assets Names in account:")
+ var assets = await mediaClient.assets.list(resourceGroup, accountName);
+
+ assets.forEach(asset => {
+ console.log(asset.name);
+ });
+
+ if (assets.odatanextLink) {
+ console.log("There are more than 1000 assets in this account, use the assets.listNext() method to continue listing more assets if needed")
+ console.log("For example: assets = await mediaClient.assets.listNext(assets.odatanextLink)");
}
-}, async function(err, credentials, subscriptions) {
- if (err) return console.log(err);
- azureMediaServicesClient = new MediaServices(credentials, subscriptionId, armEndpoint, { noRetryPolicy: true });
-
- console.log("connected");
+}
+main().catch((err) => {
+ console.error("Error running sample:", err.message);
}); ```
-## Run your app
+## Run the sample application HelloWorld-ListAssets
-Open a command prompt. Browse to the sample's directory, and execute the following commands:
+Clone the repository for the Node.js Samples
+
+```git
+git clone https://github.com/Azure-Samples/media-services-v3-node-tutorials.git
+```
+
+Change directory into the AMSv3Samples folder
+```bash
+cd AMSv3Samples
+```
Install the packages listed in package.json
``` npm install
-node index.js
```
+Change directory into the HelloWorld-ListAssets folder
+```bash
+cd HelloWorld-ListAssets
+```
+
+Launch Visual Studio Code from the AMSv3Samples folder. This is required because the ".vscode" folder and tsconfig.json files are located there.
+```bash
+cd ..
+code .
+```
+
+Open the folder for HelloWorld-ListAssets, and open the index.ts file in the Visual Studio Code editor.
+While in the index.ts file, press F5 to launch the debugger. You should see a list of assets displayed if you have assets already in the account. If the account is empty, you will see an empty list. Upload a few assets in the portal to see the results.
+
+## More samples
+
+The following samples are available in the [repository](https://github.com/Azure-Samples/media-services-v3-node-tutorials)
+
+|Project name|Use Case|
+|||
|Live/index.ts| Basic live streaming example. **WARNING**: when using live streaming, make sure that all resources are cleaned up and no longer billing in the portal.|
+|StreamFilesSample/index.ts| Basic example for uploading a local file or encoding from a source URL. Sample shows how to use storage SDK to download content, and shows how to stream to a player |
+|StreamFilesWithDRMSample/index.ts| Demonstrates how to encode and stream using Widevine and PlayReady DRM |
+|VideoIndexerSample/index.ts| Example of using the Video and Audio Analyzer presets to generate metadata and insights from a video or audio file |
+ ## See also - [Media Services concepts](concepts-overview.md)-- [NPM install azure-arm-mediaservices](https://www.npmjs.com/package/azure-arm-mediaservices/)
+- [npm install @azure/arm-mediaservices](https://www.npmjs.com/package/@azure/arm-mediaservices)
+- [Azure for JavaScript & Node.js developers](https://docs.microsoft.com/azure/developer/javascript/?view=azure-node-latest)
+- [Media Services source code in the @azure/azure-sdk-for-js repo](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/mediaservices/arm-mediaservices)
## Next steps
media-services Content Aware Encoding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/content-aware-encoding.md
You should be aware of the content you are processing, and customize/tune the en
Microsoft's [Adaptive Streaming](autogen-bitrate-ladder.md) preset partially addresses the problem of the variability in the quality and resolution of the source videos. Our customers have a varying mix of content, some at 1080p, others at 720p, and a few at SD and lower resolutions. Furthermore, not all source content is high-quality mezzanines from film or TV studios. The Adaptive Streaming preset addresses these problems by ensuring that the bitrate ladder never exceeds the resolution or the average bitrate of the input mezzanine. However, this preset does not examine source properties other than resolution and bitrate.
-## The content-aware encoding
+## The content-aware encoding
The content-aware encoding preset extends the "adaptive bitrate streaming" mechanism, by incorporating custom logic that lets the encoder seek the optimal bitrate value for a given resolution, but without requiring extensive computational analysis. This preset produces a set of GOP-aligned MP4s. Given any input content, the service performs an initial lightweight analysis of the input content, and uses the results to determine the optimal number of layers, appropriate bitrate and resolution settings for delivery by adaptive streaming. This preset is particularly effective for low and medium complexity videos, where the output files will be at lower bitrates than the Adaptive Streaming preset but at a quality that still delivers a good experience to viewers. The output will contain MP4 files with video and audio interleaved
Below are the results for another category of source content, where the encoder
You can create transforms that use this preset as follows.
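For example, a hedged Azure CLI sketch (the account, resource group, and transform names are placeholders):

```azurecli
az ams transform create \
    --resource-group {resource-group-name} \
    --account-name {media-services-account-name} \
    --name contentAwareTransform \
    --preset ContentAwareEncoding
```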
-See the [Next steps](#next-steps) section for tutorials that use tranform outputs. The output asset can be delivered from Media Services streaming endpoints in protocols such as MPEG-DASH and HLS (as shown in the tutorials).
+See the [Next steps](#next-steps) section for tutorials that use transform outputs. The output asset can be delivered from Media Services streaming endpoints in protocols such as MPEG-DASH and HLS (as shown in the tutorials).
> [!NOTE] > Make sure to use the **ContentAwareEncoding** preset not ContentAwareEncodingExperimental.
media-services Content Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/content-protection-overview.md
Last updated 08/31/2020
-#Customer intent: As a developer who works on subsystems of online streaming/multiscreen solutions that need to deliver protected content, I want to make sure that delivered content is protected with DRM or AES-128.
+ # Protect your content with Media Services dynamic encryption
media-services Create Streaming Locator Build Url https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/create-streaming-locator-build-url.md
In Azure Media Services, to build a streaming URL, you need to first create a [S
This article demonstrates how to create a streaming locator and build a streaming URL using Java and .NET SDKs.
-## Prerequisite
+## Prerequisite
Preview [Dynamic packaging](dynamic-packaging-overview.md)
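Although this article uses the Java and .NET SDKs, the same flow can be sketched with the Azure CLI for orientation (all names in braces are placeholders):

```azurecli
# Create a streaming locator for an existing asset (clear streaming, no encryption).
az ams streaming-locator create \
    --resource-group {resource-group-name} \
    --account-name {media-services-account-name} \
    --asset-name {asset-name} \
    --name exampleLocator \
    --streaming-policy-name Predefined_ClearStreamingOnly

# List the relative streaming paths; combine them with your streaming endpoint host name to build URLs.
az ams streaming-locator get-paths \
    --resource-group {resource-group-name} \
    --account-name {media-services-account-name} \
    --name exampleLocator
```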
media-services Custom Preset Cli Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/custom-preset-cli-howto.md
When creating custom presets, the following considerations apply:
Make sure to remember the resource group name and the Media Services account name. - ## Define a custom preset The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
media-services Customize Encoder Presets How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/customize-encoder-presets-how-to.md
When creating custom presets, the following considerations apply:
* All values for height and width on AVC content must be a multiple of 4. * In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
-## Prerequisites
+## Prerequisites
[Create a Media Services account](./create-account-howto.md)
Clone a GitHub repository that contains the full .NET Core sample to your machin
The custom preset sample is located in the [EncodeCustomTransform](https://github.com/Azure-Samples/media-services-v3-dotnet-core-tutorials/blob/master/NETCore/EncodeCustomTransform/) folder.
-## Create a transform with a custom preset
+## Create a transform with a custom preset
When creating a new [Transform](/rest/api/media/transforms), you need to specify what you want it to produce as an output. The required parameter is a [TransformOutput](/rest/api/media/transforms/createorupdate#transformoutput) object, as shown in the code below. Each **TransformOutput** contains a **Preset**. The **Preset** describes the step-by-step instructions of video and/or audio processing operations that are to be used to generate the desired **TransformOutput**. The following **TransformOutput** creates custom codec and layer output settings.
media-services Design Multi Drm System With Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/design-multi-drm-system-with-access-control.md
To make your selection, keep in mind:
* Widevine is natively implemented in every Android device, in Chrome, and in some other devices. Widevine is also supported in Firefox and Opera browsers over DASH. * FairPlay is available on iOS, macOS and tvOS. - ## A reference design+ This section presents a reference design that is agnostic to the technologies used to implement it. A DRM subsystem can contain the following components:
media-services Docs Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/docs-release-notes.md
-
-# Mandatory fields. See more on aka.ms/skyeye/meta.
Title: Azure Media Services v3 documentation updates
-description: To stay up-to-date with the most recent Media Services v3 documentation updates.
------- Previously updated : 08/31/2020---
-# Azure Media Services v3 documentation updates
--
->Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Azure+Media+Services+v3+documentation+updates%22&locale=en-us` into your RSS feed reader.
-
-This article talks about the most recent Media Services v3 documentation updates.
-
-## June 2020
-
-* The preview of Live Video Analytics on IoT Edge went public. See details in the [Live Video Analytics on IoT Edge](../live-video-analytics-edge/index.yml) documentation.
-* New quickstarts:
-
- * [Use portal to upload, encode, and stream content](manage-assets-quickstart.md)
- * [Use portal to encrypt content](encrypt-content-quickstart.md)
-
-## April 2020
-
-* Azure Media Player docs were migrated to the [Azure documentation](../azure-media-player/azure-media-player-overview.md).
-* The [Live streaming with Open Broadcasting Studio (OBS)](live-events-obs-quickstart.md) quickstart was added. It shows how to create a Media Services live stream by using the Azure portal and OBS.
-
-## March 2020
-
-The [Live streaming with Telestream Wirecast](live-events-wirecast-quickstart.md) quickstart was added. It shows you how to create an Azure Media Services live stream by using the Azure portal and Telestream Wirecast.
-
-## Next steps
--- [Overview](media-services-overview.md)-- [Media Services v3 release notes](release-notes.md)
media-services Dynamic Packaging Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/dynamic-packaging-overview.md
ms.devlang: na
Last updated 09/30/2020
-#Customer intent: As a developer or content provider, I want to encode and stream on-demand or live content so my customers can view the content on a wide variety of clients (these clients understand different formats).
+ # Dynamic packaging in Media Services v3
media-services Filters Dynamic Manifest Cli Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/filters-dynamic-manifest-cli-howto.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-When delivering your content to customers (streaming Live events or Video on Demand), your client might need more flexibility than what's described in the default asset's manifest file. Azure Media Services enables you to define account filters and asset filters for your content.
+When delivering your content to customers (streaming Live events or Video on Demand), your client might need more flexibility than what's described in the default asset's manifest file. Azure Media Services enables you to define account filters and asset filters for your content.
For detailed description of this feature and scenarios where it is used, see [Dynamic Manifests](filters-dynamic-manifest-overview.md) and [Filters](filters-concept.md).
-This topic shows how to configure a filter for a Video on-Demand asset and use CLI for Media Services v3 to create [Account Filters](/cli/azure/ams/account-filter?view=azure-cli-latest) and [Asset Filters](/cli/azure/ams/asset-filter?view=azure-cli-latest).
+This topic shows how to configure a filter for a Video on-Demand asset and use CLI for Media Services v3 to create [Account Filters](/cli/azure/ams/account-filter?view=azure-cli-latest) and [Asset Filters](/cli/azure/ams/asset-filter?view=azure-cli-latest).
> [!NOTE] > Make sure to review [presentationTimeRange](filters-concept.md#presentationtimerange).
-## Prerequisites
+## Prerequisites
-- [Create a Media Services account](./create-account-howto.md). Make sure to remember the resource group name and the Media Services account name.
+- [Create a Media Services account](./create-account-howto.md). Make sure to remember the resource group name and the Media Services account name.
-
-## Define a filter
+## Define a filter
The following example defines the track selection conditions that are added to the final manifest. This filter includes any audio tracks that are EC-3 and any video tracks that have bitrate in the 0-1000000 range.
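As a hedged illustration of those conditions (EC-3 audio plus video in the 0-1000000 bitrate range), the sketch below builds the equivalent track selections with the Python `azure-mgmt-media` models; the same selections can also be expressed as the JSON document passed to the CLI's `--tracks` parameter. The client, resource group, account, and filter names are assumptions.

```python
# A minimal sketch, assuming an authenticated AzureMediaServices client ("client").
from azure.mgmt.media.models import (
    AccountFilter, FilterTrackSelection, FilterTrackPropertyCondition,
)

track_selections = [
    # Select audio tracks whose FourCC is EC-3.
    FilterTrackSelection(track_selections=[
        FilterTrackPropertyCondition(property="Type", value="Audio", operation="Equal"),
        FilterTrackPropertyCondition(property="FourCC", value="EC-3", operation="Equal"),
    ]),
    # Select video tracks with bitrate between 0 and 1,000,000 bps.
    FilterTrackSelection(track_selections=[
        FilterTrackPropertyCondition(property="Type", value="Video", operation="Equal"),
        FilterTrackPropertyCondition(property="Bitrate", value="0-1000000", operation="Equal"),
    ]),
]

account_filter = client.account_filters.create_or_update(
    "myResourceGroup", "myMediaAccount", "myAccountFilter",   # hypothetical names
    AccountFilter(tracks=track_selections),
)
```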
The following example defines the track selection conditions that are added to t
## Create account filters
-The following [az ams account-filter](/cli/azure/ams/account-filter?view=azure-cli-latest) command creates an account filter with filter track selections that were [defined earlier](#define-a-filter).
+The following [az ams account-filter](/cli/azure/ams/account-filter?view=azure-cli-latest) command creates an account filter with filter track selections that were [defined earlier](#define-a-filter).
The command allows you to pass an optional `--tracks` parameter that contains JSON representing the track selections. Use @{file} to load JSON from a file. If you are using the Azure CLI locally, specify the whole file path:
The following table shows some examples of URLs with filters:
## Next step
-[Stream videos](stream-files-tutorial-with-api.md)
+[Stream videos](stream-files-tutorial-with-api.md)
## See also
media-services Filters Dynamic Manifest Dotnet Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/filters-dynamic-manifest-dotnet-howto.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-When delivering your content to customers (streaming Live events or Video on Demand) your client might need more flexibility than what's described in the default asset's manifest file. Azure Media Services enables you to define account filters and asset filters for your content.
+When delivering your content to customers (streaming Live events or Video on Demand) your client might need more flexibility than what's described in the default asset's manifest file. Azure Media Services enables you to define account filters and asset filters for your content.
For detailed description of this feature and scenarios where it is used, see [Dynamic Manifests](filters-dynamic-manifest-overview.md) and [Filters](filters-concept.md).
media-services How To Create Basic Audio Transform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/how-to-create-basic-audio-transform.md
Follow the steps in [Create a Media Services account](./create-account-howto.md)
[!INCLUDE [media-services-cli-instructions.md](./includes/task-create-basic-audio-rest.md)] - ## Next steps
-[Media Services Overview](media-services-overview.md)
media-services How To Create Copy Video Audio Transform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/how-to-create-copy-video-audio-transform.md
Follow the steps in [Create a Media Services account](./create-account-howto.md)
[!INCLUDE [task-create-copy-video-audio-rest.md](./includes/task-create-copy-video-audio-rest.md)] - ## Next steps
-[Media Services Overview](media-services-overview.md)
media-services How To Create Copyallbitratenoninterleaved Transform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/how-to-create-copyallbitratenoninterleaved-transform.md
Follow the steps in [Create a Media Services account](./create-account-howto.md)
[!INCLUDE [task-create-copyallbitratenoninterleaved.md](./includes/task-create-copyallbitratenoninterleaved.md)] - ## Next steps
-[Media Services Overview](media-services-overview.md)
media-services How To Create Overlay https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/how-to-create-overlay.md
Download the [media-services-overlay sample](https://github.com/Azure-Samples/me
## Next steps
-* [Subclip a video when encoding with Media Services - .NET](subclip-video-dotnet-howto.md)
media-services How To Create Thumbnail Sprites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/how-to-create-thumbnail-sprites.md
+
+ Title: Create a thumbnail sprites transform
+description: How do I create thumbnail sprites? You can create a transform for a job that will generate thumbnail sprites for your videos. This article shows you how.
+
+documentationcenter: ''
++
+editor:
+
+ms.assetid:
+
+ms.devlang: multiple
+
+ multiple
+ Last updated : 2/17/2021+++
+# Create a thumbnail sprite transform
++
+How do I create thumbnail sprites? You can create a transform for a job that will generate thumbnail sprites for your videos. This article shows you how with the Media Services v3 2020-05-01 API.
+
+Add the code snippets for your preferred development language.
+
+## [REST](#tab/rest/)
++
+## [.NET](#tab/dotnet/)
++
+See also thumbnail sprite creation in a [complete encoding sample](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/master/VideoEncoding/EncodingWithMESCustomPresetAndSprite/Program.cs#L261-L287) at Azure Samples.
+++
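Purely as an illustration (separate from the article's REST and .NET snippets), here is a minimal Python sketch of a sprite-generating transform using `azure-mgmt-media` models from the 2020-05-01 API surface. The client and all names and values are assumptions; `sprite_column` is the setting that packs the individual JPG thumbnails into a sprite sheet.

```python
# A minimal sketch, assuming an authenticated AzureMediaServices client ("client")
# and an SDK version that exposes the 2020-05-01 API (JpgImage.sprite_column).
from azure.mgmt.media.models import (
    Transform, TransformOutput, StandardEncoderPreset,
    JpgImage, JpgLayer, JpgFormat,
)

sprite_preset = StandardEncoderPreset(
    codecs=[
        # One thumbnail every 5% of the video, packed into a sprite 10 columns wide.
        JpgImage(start="0%", step="5%", range="100%", sprite_column=10,
                 layers=[JpgLayer(width="20%", height="20%", quality=85)]),
    ],
    formats=[JpgFormat(filename_pattern="sprite-{Basename}-{Index}{Extension}")],
)

client.transforms.create_or_update(
    "myResourceGroup", "myMediaAccount", "myThumbnailSpriteTransform",  # hypothetical
    Transform(outputs=[TransformOutput(preset=sprite_preset)]),
)
```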
+## Next steps
+
media-services How To Create Transform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/how-to-create-transform.md
The Azure CLI script in this article shows how to create a transform. Transforms
## [CLI](#tab/cli/) - > [!NOTE] > You can only specify a path to a custom Standard Encoder preset JSON file for [StandardEncoderPreset](/rest/api/medi) example. >
The Azure CLI script in this article shows how to create a transform. Transforms
## Next steps
-[More about transforms and jobs](transforms-jobs-concept.md)
media-services How To Shaka Player https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/how-to-shaka-player.md
We recommend using [Mux.js](https://github.com/videojs/mux.js/) as, without it,
Its official documentation can be found at [Shaka player documentation](https://shaka-player-demo.appspot.com/docs/api/tutorial-welcome.html). ## Sample code+ Sample code for this article is available at [Azure-Samples/media-services-3rdparty-player-samples](https://github.com/Azure-Samples/media-services-3rdparty-player-samples). ## Implementing the player
media-services How To Upload Media https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/how-to-upload-media.md
Before you get started though, you'll need to collect or think about a few value
## [CLI](#tab/cli/)
-## [REST](#tab/rest/)
+## [Python](#tab/python)
-Once you have [created an asset using Postman or other REST method and gotten the SAS URL for the asset](how-to-create-asset.md?tabs=rest), use the Azure Storage APIs or SDKs (for example, the [Storage REST API](../../storage/common/storage-rest-api-auth.md) or [.NET SDK](../../storage/blobs/storage-quickstart-blobs-dotnet.md).
+Assuming that your code has already established authentication and you have already created an input Asset, use the following code snippet to upload local files to that asset (in_container).
+
+```python
+#The storage objects
+from azure.storage.blob import BlobServiceClient, BlobClient
+
+#Establish storage variables
+storage_account_name = '<your storage account name>'
+storage_account_key = '<your storage account key>'
+storage_blob_url = 'https://<your storage account name>.blob.core.windows.net/'
+
+in_container = 'asset-' + inputAsset.asset_id
+
+#The file path of local file you want to upload
+source_file = "ignite.mp4"
+
+# Use the Storage SDK to upload the video
+blob_service_client = BlobServiceClient(account_url=storage_blob_url, credential=storage_account_key)
+blob_client = blob_service_client.get_blob_client(in_container, source_file)
+
+# Upload the video to storage as a block blob
+with open(source_file, "rb") as data:
+ blob_client.upload_blob(data, blob_type="BlockBlob")
+```
<!-- add these to the tabs when available -->
For other methods see the [Azure Storage documentation](../../storage/blobs/inde
## Next steps
-> [Media Services overview](media-services-overview.md)
+> [Media Services overview](media-services-overview.md)
media-services Job Input From Local File How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/job-input-from-local-file-how-to.md
In Media Services v3, when you submit Jobs to process your videos, you have to tell Media Services where to find the input video. The input video can be stored as a Media Service Asset, in which case you create an input asset based on a file (stored locally or in Azure Blob storage). This topic shows how to create a job input from a local file. For a full example, see this [GitHub sample](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/blob/master/AMSV3Tutorials/UploadEncodeAndStreamFiles/Program.cs).
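For orientation only, the following Python sketch shows the asset-based flow this paragraph describes: register an input Asset, upload the local file into the asset's storage container (for example with `azure-storage-blob`, as in the upload snippet earlier on this page), and then reference the asset as the job input. The client, transform, and all names are assumptions, not the GitHub sample's code.

```python
# A minimal sketch, assuming an authenticated AzureMediaServices client ("client")
# and an existing Transform named "myTransform". Names are hypothetical.
from azure.mgmt.media.models import Asset, Job, JobInputAsset, JobOutputAsset

rg, account = "myResourceGroup", "myMediaAccount"

# 1) Register the input and output assets (each is backed by a storage container).
client.assets.create_or_update(rg, account, "my-input-asset", Asset())
client.assets.create_or_update(rg, account, "my-output-asset", Asset())

# 2) Upload the local video into the input asset's container here,
#    e.g. with azure-storage-blob (omitted; see the upload snippet earlier on this page).

# 3) Submit the job, pointing the input at the asset rather than an HTTPS URL.
job = client.jobs.create(
    rg, account, "myTransform", "my-local-file-job",
    Job(
        input=JobInputAsset(asset_name="my-input-asset"),
        outputs=[JobOutputAsset(asset_name="my-output-asset")],
    ),
)
```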
-## Prerequisites
+## Prerequisites
* [Create a Media Services account](./create-account-howto.md).
media-services Job Multiple Transform Outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/job-multiple-transform-outputs.md
private static async Task<Transform> GetOrCreateTransformAsync(
return transform; } ```+ ## Submit a job Create a job with an HTTPS URL input and with two job outputs.
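To make the two-output submission concrete, here is a hedged Python sketch (the article itself uses .NET): one `JobInputHttp` input plus two `JobOutputAsset` outputs, one per TransformOutput defined in the transform. The client, transform name, asset names, and source URL are hypothetical.

```python
# A minimal sketch, assuming an authenticated AzureMediaServices client ("client")
# and a Transform ("myMultiOutputTransform") that defines two TransformOutputs.
from azure.mgmt.media.models import Job, JobInputHttp, JobOutputAsset

job = client.jobs.create(
    "myResourceGroup", "myMediaAccount", "myMultiOutputTransform", "my-two-output-job",
    Job(
        # Hypothetical, publicly reachable source URL.
        input=JobInputHttp(files=["https://example.com/videos/input.mp4"]),
        # One output asset per TransformOutput, in the same order.
        outputs=[
            JobOutputAsset(asset_name="my-first-output-asset"),
            JobOutputAsset(asset_name="my-second-output-asset"),
        ],
    ),
)
```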
media-services Job State Events Cli How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/job-state-events-cli-how-to.md
In this article, you use the Azure CLI to subscribe to events for your Azure Med
## Prerequisites - An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.-- Install and use the CLI locally, this article requires the Azure CLI version 2.0 or later. Run `az --version` to find the version you have. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+- Install and use the CLI locally. This article requires Azure CLI version 2.0 or later. Run `az --version` to find the version you have. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
Currently, not all [Media Services v3 CLI](/cli/azure/ams) commands work in the Azure Cloud Shell. It is recommended to use the CLI locally.
media-services Live Event Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-event-error-codes.md
When you subscribe to the [Event Grid](../../event-grid/index.yml) events for a
live event, you may see one of the following errors from the [LiveEventConnectionRejected](media-services-event-schemas.md\#liveeventconnectionrejected) event.+ > [!div class="mx-tdCol2BreakAll"] >| Error | Information | >|--|--| >|**MPE_RTMP_APPID_AUTH_FAILURE** ||
->|Description | Incorrect ingest URL |
->|Suggested solution| APPID is a GUID token in RTMP ingest URL. Make sure it matches with Ingest URL from API. |
+>|Description | Incorrect ingest URL. |
+>|Suggested solution| APPID is a GUID token in the RTMP ingest URL. Make sure that it matches the URL from one of the live event's input endpoint URLs. |
>|**MPE_INGEST_ENCODER_CONNECTION_DENIED** ||
->| Description |Encoder IP isn't present in the configured IP allow list |
->| Suggested solution| Make sure the encoder's IP is in the IP Allow List. Use an online tool such as *whoismyip* or *CIDR calculator* to set the proper value. Make sure the encoder can reach the server before the actual live event. |
+>| Description |Encoder IP isn't present in the configured IP allowlist. |
+>| Suggested solution| Make sure that the encoder's IP address is in the IP allowlist specified in the live event's input access control property. Use an online tool such as *whoismyip* or *CIDR calculator* to set the proper value. Before the live event, make sure that the encoder can reach the server. |
>|**MPE_INGEST_RTMP_SETDATAFRAME_NOT_RECEIVED** ||
->| Description|The RTMP encoder did not send the `setDataFrame` command. |
->| Suggested solution|Most commercial encoders send stream metadata. For an encoder that pushes a single bitrate ingest, this may not be issue. The LiveEvent is able to calculate incoming bitrate when the stream metadata is missing. For multi-bitrate ingest for a PassThru channel or double push scenario, you can try to append the query string with 'videodatarate' and 'audiodatarate' in the ingest URL. The approximate value may work. The unit is in Kbit. For example, `rtmp://hostname:1935/live/GUID_APPID/streamname?videodatarate=5000&audiodatarate=192` |
+>| Description|The RTMP encoder didn't send the `setDataFrame` command. |
+>| Suggested solution|Most commercial encoders send stream metadata. For an encoder that pushes a single bitrate ingest, this may not be an issue. The live event is able to calculate incoming bitrate when the stream metadata is missing. For multi-bitrate ingest for a pass-through live event or double push scenario, you can try to append the query string with `videodatarate` and `audiodatarate` in the ingest URL. The approximate value may work. The unit is in Kbit. For example, `rtmp://hostname:1935/live/GUID_APPID/streamname?videodatarate=5000&audiodatarate=192` |
>|**MPE_INGEST_CODEC_NOT_SUPPORTED** || >| Description|The codec specified isn't supported.|
->| Suggested solution| The LiveEvent received unsupported codec. For example, an RTMP ingest, LiveEvent received non-AVC video codec. Check encoder preset. |
+>| Suggested solution| The live event received an unsupported codec. For a list of supported codecs, see [live event supported codecs](live-event-types-comparison.md). |
>|**MPE_INGEST_DESCRIPTION_INFO_NOT_RECEIVED** ||
->| Description |The media description information was not received before the actual media data was delivered. |
->| Suggested solution|The LiveEvent does not receive the stream description (header or FLV tag) from the encoder. This is a protocol violation. Contact encoder vendor. |
+>| Description |The media description information wasn't received before the actual media data was delivered. |
+>| Suggested solution|The live event didn't receive the stream description (header or FLV tag) from the encoder. This is a protocol violation. To resolve, contact the encoder vendor. |
>|**MPE_INGEST_MEDIA_QUALITIES_EXCEEDED** ||
->| Description|The count of qualities for audio or video type exceeded the maximum allowed limit. |
+>| Description|The count of qualities for the audio or video type exceeded the maximum allowed limit. |
>| Suggested solution|When Live Event mode is Live Encoding, the encoder should push a single bitrate of video and audio. Note that a redundant push from the same bitrate is allowed. Check the encoder preset or output settings to make sure it outputs a single bitrate stream. | >|**MPE_INGEST_BITRATE_AGGREGATED_EXCEEDED** || >| Description|The total incoming bitrate in a live event or channel service exceeded the maximum allowed limit. |
->| Suggested solution|The encoder exceeded the maximum incoming bitrate. This limit aggregates all incoming data from the contributing encoder. Check encoder preset or output settings to reduce bitrate. |
->|**MPE_RTMP_FLV_TAG_TIMESTAMP_INVALID** ||
->| Description|The timestamp for video or audio FLVTag is invalid from the RTMP encoder. |
->| Suggested solution|Deprecated. |
+>| Suggested solution|The encoder exceeded the maximum incoming bitrate. This limit aggregates all incoming data from the contributing encoder. Check the encoder preset or output settings to reduce bitrate. |
>|**MPE_INGEST_FRAMERATE_EXCEEDED** || >| Description|The incoming encoder ingested streams with frame rates that exceeded the maximum allowed 30 fps for encoding live events/channels. |
->| Suggested solution|Check encoder preset to lower frame rate to under 36 fps. |
+>| Suggested solution| Check the contribution encoder's settings and lower the frame rate to under 36 fps. |
>|**MPE_INGEST_VIDEO_RESOLUTION_NOT_SUPPORTED** ||
->| Description|The incoming encoder ingested streams exceeded the following allowed resolutions: 1920x1088 for encoding live events/channels and 4096 x 2160 for pass-through live events/channels. |
->| Suggested solution|Check encoder preset to lower video resolution so it doesn't exceed the limit. |
+>| Description|The incoming encoder ingested streams exceeded the following allowed resolutions: 1920 x 1088 for encoding live events/channels and 4096 x 2160 for pass-through live events/channels. |
+>| Suggested solution|Check the encoder preset to lower video resolution so it doesn't exceed the limit. |
>|**MPE_INGEST_RTMP_TOO_LARGE_UNPROCESSED_FLV** |
->| Description|The live event has received a large amount of audio data at once, or a large amount of video data without any key frames. We have disconnected the encoder to give it a chance to retry with correct data. |
->| Suggested solution|Ensure that the encoder sends a key frame for every key frame interval(GOP). Enable settings like "Constant bitrate(CBR)" or "Align Key Frames". Sometimes, resetting the contributing encoder may help. If it doesn't help, contact encoder vendor. |
+>| Description|The live event has received a large amount of audio data at once, or it received a large amount of video data without any key frames. We have disconnected the encoder to give it a chance to retry with correct data. |
+>| Suggested solution|Ensure that the encoder sends a key frame for every key frame interval (GOP). Enable settings like "Constant bitrate (CBR)" or "Align Key Frames". Resetting the contributing encoder may help. |
## LiveEventEncoderDisconnected
event.
>|--|--| >|**MPE_RTMP_SESSION_IDLE_TIMEOUT** | >| Description|RTMP session timed out after being idle for allowed time limit. |
->|Suggested solution|This typically happens when an encoder stops receiving the input feed so that the session becomes idle because there is no data to push out. Check if the encoder or input feed status is in a healthy state. |
->|**MPE_RTMP_FLV_TAG_TIMESTAMP_INVALID** |
->|Description| The timestamp for the video or audio FLVTag is invalid from RTMP encoder. |
->| Suggested solution| Deprecated. |
+>|Suggested solution|This typically happens when an encoder stops receiving the input feed. The session becomes idle because there is no data to push out. Check the encoder or input feed status to see if it's in a healthy state. |
>|**MPE_CAPACITY_LIMIT_REACHED** |
->| Description|Encoder sending data too fast. |
->| Suggested solution|This happens when the encoder bursts out a large set of fragments in a brief period. This can theoretically happen when the encoder can't push data for while due to a network issue and the bursts out data when the network is available. Find the reason from encoder log or system log. |
->|**Unknown error codes** |
->| Description| These error codes can range from memory error to duplicate entries in hash map. |
+>| Description|Encoder is sending too much data at once. |
+>| Suggested solution|This happens when the encoder sends out a large set of fragments in a brief period. This could happen when the encoder couldn't push data due to a network issue and then sends all the delayed fragments at once when the network becomes available. For details, check the encoder logs. |
+ ## Other error codes > [!div class="mx-tdCol2BreakAll"]
->| Error | Information |Rejected/Disconnected Event|
+>| Error | Information |Encoder disconnected/rejected|
>|--|--|--| >|**ERROR_END_OF_MEDIA** ||Yes|
->| Description|This is general error. ||
+>| Description|This is a general error. ||
>|Suggested solution| None.|| >|**MPI_SYSTEM_MAINTENANCE** ||Yes|
->| Description|The encoder disconnected due to service update or system maintenance. ||
->|Suggested solution|Make sure encoder enables 'auto connect'. This is encoder feature to recover the unexpected session disconnection. ||
+>| Description|The encoder disconnected due to a service update or system maintenance. ||
+>|Suggested solution|Make sure the 'auto connect' feature is enabled for the contribution encoder. It allows the encoder to reconnect to the redundant live event endpoint that is not in maintenance. ||
>|**MPE_BAD_URL_SYNTAX** ||Yes| >| Description|The ingest URL is incorrectly formatted. ||
->|Suggested solution|Make sure the ingest URL is correctly formatted. For RTMP, it should be `rtmp[s]://hostname:port/live/GUID_APPID/streamname` ||
+>|Suggested solution|Make sure that the ingest URL is correctly formatted. Refer to the live event's input endpoint URLs on the API. ||
>|**MPE_CLIENT_TERMINATED_SESSION** ||Yes| >| Description|The encoder disconnected the session. ||
->|Suggested solution|This is not error. This is the case where encoder initiated disconnection, including graceful disconnection. If this is an unexpected disconnect, check the encoder log or system log. |
+>|Suggested solution|This isn't an error. The encoder initiated the disconnection, which might indicate a graceful shutdown. If this is an unexpected disconnection, check the encoder logs. |
>|**MPE_INGEST_BITRATE_NOT_MATCH** ||No|
->| Description|The incoming data rate does not match with expected bitrate. ||
->|Suggested solution|This is a warning which happens when incoming data rate is too slow or fast. Check encoder log or system log.||
+>| Description|The incoming data rate doesn't match the expected bitrate. ||
+>|Suggested solution|This is a warning that happens when the incoming data rate is too slow or too fast. Check the encoder logs.||
>|**MPE_INGEST_DISCONTINUITY** ||No| >| Description| There is discontinuity in incoming data.||
->|Suggested solution| This is a warning that the encoder drops data due to a network issue or a system resource issue. Check the encoder log or system log. Monitor the system resource (CPU, memory or network) as well. If the system CPU is too high, try to lower the bitrate or use the H/W encoder option from the system graphics card.||
+>|Suggested solution| This is a warning that the encoder drops data due to a network issue or a system resource issue. Check the encoder log or system log. Monitor the system resource (CPU, memory or network) as well. If the system CPU is too high, try to lower the bitrate or use the GPU encoder option from the system graphics card.||
## See also
media-services Media Reserved Units Cli How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/media-reserved-units-cli-how-to.md
This article shows you how to scale Media Reserved Units (MRUs) for faster encod
Understand [Media Reserved Units](concept-media-reserved-units.md). - ## Scale Media Reserved Units with CLI Run the `mru` command.
media-services Media Services Generate Thumbnails Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/media-services-generate-thumbnails-dotnet.md
private static Transform EnsureTransformExists(IAzureMediaServicesClient client,
``` ## Next steps+ [Generate thumbnails using REST](media-services-generate-thumbnails-rest.md)
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/release-notes.md
Check out the [Azure Media Services community](media-services-community.md) arti
## Next steps - [Overview](media-services-overview.md)-- [Media Services v3 Documentation updates](docs-release-notes.md) - [Media Services v2 release notes](../previous/media-services-release-notes.md)
media-services Stream Files Nodejs Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-files-nodejs-quickstart.md
Previously updated : 08/31/2020 Last updated : 02/17/2021 #Customer intent: As a developer, I want to create a Media Services account so that I can store, encrypt, encode, manage, and stream media content in Azure.
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This tutorial shows you how easy it is to encode and start streaming videos on a wide variety of browsers and devices using Azure Media Services. An input content can be specified using HTTPS URLs, SAS URLs, or paths to files located in Azure Blob storage.
+This tutorial shows you how easy it is to encode and start streaming videos on a wide variety of browsers and devices using Azure Media Services. An input video file can be specified using HTTPS URLs, SAS URLs, or paths to files located in Azure Blob storage.
The sample in this article encodes content that you make accessible via an HTTPS URL. Note that currently, AMS v3 does not support chunked transfer encoding over HTTPS URLs.
-By the end of the tutorial you will be able to stream a video.
+By the end of the tutorial you will know how to upload, encode and stream a video using an HLS or DASH client player.
![Play the video](./media/stream-files-nodejs-quickstart/final-video.png)
By the end of the tutorial you will be able to stream a video.
- Install [Node.js](https://nodejs.org/en/download/) - [Create a Media Services account](./create-account-howto.md).<br/>Make sure to remember the values that you used for the resource group name and Media Services account name. - Follow the steps in [Access Azure Media Services API with the Azure CLI](./access-api-howto.md) and save the credentials. You will need to use them to access the API.
+- Walk through the [Configure and Connect with Node.js](./configure-connect-nodejs-howto.md) how-to first to understand how to use the Node.js client SDK.
## Download and configure the sample
Clone a GitHub repository that contains the streaming Node.js sample to your mac
The sample is located in the [StreamFilesSample](https://github.com/Azure-Samples/media-services-v3-node-tutorials/tree/master/AMSv3Samples/StreamFilesSample) folder.
-Open [index.js](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/master/AMSv3Samples/StreamFilesSample/index.js#L25) in you downloaded project. Replace the `endpoint config` values with credentials that you got from [accessing APIs](./access-api-howto.md).
+Open [index.ts](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/master/AMSv3Samples/StreamFilesSample/index.ts) in your downloaded project. Update the sample.env file in the root folder with the values and credentials that you got from [accessing APIs](./access-api-howto.md). Rename the sample.env file to .env.
The sample performs the following actions: 1. Creates a **Transform** (first, checks if the specified Transform exists). 2. Creates an output **Asset** that is used as the encoding **Job**'s output.
-3. Creates the **Job**'s input that is based on an HTTPS URL.
-4. Submits the encoding **Job** using the input and output that was created earlier.
-5. Checks the Job's status.
-6. Creates a **Streaming Locator**.
-7. Builds streaming URLs.
+1. Optionally uploads a local file using the Storage blob SDK.
+1. Creates the **Job**'s input that is based on an HTTPS URL or the uploaded file.
+1. Submits the encoding **Job** with the [Content Aware Encoding preset](./content-aware-encoding.md), using the input and output that was created earlier.
+1. Checks the Job's status.
+1. Downloads the output of the encoding job to a local folder.
+1. Creates a **Streaming Locator** to use in the player.
+1. Builds streaming URLs for HLS and DASH (sketched in the example after this list).
+1. Plays the content back in a player application - Azure Media Player.
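Independent of the Node.js sample itself, the streaming-locator and URL-building steps above can be sketched like this in Python with the `azure-mgmt-media` package; the client, resource names, locator name, and streaming policy are assumptions, not the sample's code.

```python
# A minimal sketch, assuming an authenticated AzureMediaServices client ("client")
# and an encoded output asset named "my-output-asset". Names are hypothetical.
from azure.mgmt.media.models import StreamingLocator

rg, account = "myResourceGroup", "myMediaAccount"

client.streaming_locators.create(
    rg, account, "my-streaming-locator",
    StreamingLocator(asset_name="my-output-asset",
                     streaming_policy_name="Predefined_ClearStreamingOnly"),
)

# Combine the default streaming endpoint's host name with each locator path
# to get playable HLS and DASH URLs.
endpoint = client.streaming_endpoints.get(rg, account, "default")
paths = client.streaming_locators.list_paths(rg, account, "my-streaming-locator")
for streaming_path in paths.streaming_paths:
    if streaming_path.paths:
        print(streaming_path.streaming_protocol,
              f"https://{endpoint.host_name}{streaming_path.paths[0]}")
```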
## Run the sample app
-1. The app downloads encoded files. Create a folder where you want for the output files to go and update the value of the **outputFolder** variable in the [index.js](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/master/AMSv3Samples/StreamFilesSample/index.js#L39) file.
+1. The app downloads encoded files. Create a folder where you want the output files to go and update the value of the **outputFolder** variable in the [index.ts](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/master/AMSv3Samples/StreamFilesSample/index.js#L59) file. It is set to "Temp" by default.
1. Open **command prompt**, browse to the sample's directory, and execute the following commands.-
+1. Change directory into the AMSv3Samples folder
+ ```bash
+ cd AMSv3Samples
```+
+1. Install the packages used in the package.json
+ ```bash
npm install
- node index.js
```
-After it is done running, you should see similar output:
+1. Change directory into the StreamFilesSample folder
+ ```bash
+ cd StreamFilesSample
+ ```
+
+1. Launch Visual Studio Code from the AMSv3Samples folder. This is required to launch from the folder where the ".vscode" folder and tsconfig.json files are located.
+
+ ```bash
+ cd ..
+ code .
+ ```
+
+Open the folder for StreamFilesSample, and open the index.ts file in the Visual Studio Code editor.
+While in the index.ts file, press F5 to launch the debugger.
-![Screenshot of a command window with output from the StreamFileSample sample app showing the URLs of three files downloaded to the local directory.](./media/stream-files-nodejs-quickstart/run.png)
## Test with Azure Media Player
-To test the stream, this article uses Azure Media Player.
+To test the stream, this article uses Azure Media Player. You can also use any HLS or DASH compliant player, like Shaka player, HLS.js, Dash.js, or others.
+
+You should be able to click the link generated by the sample and launch the AMP player with the DASH manifest already loaded.
> [!NOTE] > If a player is hosted on an https site, make sure to update the URL to "https".
Execute the following CLI command:
az group delete --name amsResourceGroup ```
+## More developer documentation for Node.js on Azure
+- [Azure for JavaScript & Node.js developers](https://docs.microsoft.com/azure/developer/javascript/?view=azure-node-latest)
+- [Media Services source code in the @azure/azure-sdk-for-js GitHub repo](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/mediaservices/arm-mediaservices)
+- [Azure Package Documentation for Node.js developers](https://docs.microsoft.com/javascript/api/overview/azure/?view=azure-node-latest)
+ ## See also
-[Job error codes](/rest/api/media/jobs/get#joberrorcode).
+- [Job error codes](/rest/api/media/jobs/get#joberrorcode).
+- [npm install @azure/arm-mediaservices](https://www.npmjs.com/package/@azure/arm-mediaservices)
+- [Azure for JavaScript & Node.js developers](https://docs.microsoft.com/azure/developer/javascript/?view=azure-node-latest)
+- [Media Services source code in the @azure/azure-sdk-for-js repo](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/mediaservices/arm-mediaservices)
## Next steps
migrate Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/security-baseline.md
Use built-in roles to allocate permission and only create custom role when requi
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-high-availability.md
Title: High availability - Azure Database for MySQL description: This article provides information on high availability in Azure Database for MySQL--++ Last updated 7/7/2020
Azure Database for MySQL is suitable for running mission critical databases that
| **Component** | **Description**| | | -- |
-| <b>MySQL Database Server | Azure Database for MySQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities facilitate operations such as scaling and database server recovery operation after an outage to happen in seconds. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write ahead logs (ib_log) on Azure Storage – which is attached to the database server. During the database [checkpoint](https://dev.mysql.com/doc/refman/5.7/en/innodb-checkpoints.html) process, data pages from the database server memory are also flushed to the storage. |
-| <b>Remote Storage | All MySQL physical data files and log files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within few seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
+| <b>MySQL Database Server | Azure Database for MySQL provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities facilitate operations such as scaling and database server recovery after an outage, which complete in 60-120 seconds depending on the transactional activity on the database. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write ahead logs (ib_log) on Azure Storage – which is attached to the database server. During the database [checkpoint](https://dev.mysql.com/doc/refman/5.7/en/innodb-checkpoints.html) process, data pages from the database server memory are also flushed to the storage. |
+| <b>Remote Storage | All MySQL physical data files and log files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within 60 seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
| <b>Gateway | The Gateway acts as a database proxy, routes all client connections to the database server. | ## Planned downtime mitigation
Here are some planned maintenance scenarios:
| <b>Compute scale up/down | When the user performs compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.| | <b>Scaling Up Storage | Scaling up the storage is an online operation and does not interrupt the database server.| | <b>New Software Deployment (Azure) | New features rollout or bug fixes automatically happen as part of service's planned maintenance. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
-| <b>Minor version upgrades | Azure Database for MySQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. This would incur a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
+| <b>Minor version upgrades | Azure Database for MySQL automatically patches database servers to the minor version determined by Azure. It happens as part of the service's planned maintenance. During planned maintenance, there can be database server restarts or failovers, which might lead to brief unavailability of the database servers for end users. Azure Database for MySQL servers run in containers, so database server restarts are typically quick and expected to complete in 60-120 seconds. The entire planned maintenance event, including each server restart, is carefully monitored by the engineering team. The server failover time depends on the database recovery time, which can cause the database to take longer to come online if there is heavy transactional activity on the server at the time of failover. To avoid a longer restart time, it is recommended to avoid any long-running transactions (bulk loads) during planned maintenance events. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
## Unplanned downtime mitigation
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. MySQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for MySQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
+Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in 60-120 seconds. The remote storage is automatically attached to the new database server. MySQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for MySQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
:::image type="content" source="./media/concepts-high-availability/availability-for-mysql-server.png" alt-text="view of High Availability in Azure MySQL":::
Azure Database for MySQL provides fast restart capability of database servers,
## Next steps - Learn about [Azure regions](../availability-zones/az-overview.md) - Learn about [handling transient connectivity errors](concepts-connectivity.md)-- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
+- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
mysql Howto Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-connect-with-managed-identity.md
You learn how to:
## Prerequisites - If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.-- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Use Azure role-based access control (Azure RBAC) to manage access to your Azure subscription resources](../../articles/role-based-access-control/role-assignments-portal.md).
+- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../articles/role-based-access-control/role-assignments-portal.md).
- You need an Azure VM (for example running Ubuntu Linux) that you'd like to use for access your database using Managed Identity - You need an Azure Database for MySQL database server that has [Azure AD authentication](howto-configure-sign-in-azure-ad-authentication.md) configured - To follow the C# example, first complete the guide how to [Connect using C#](connect-csharp.md)
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics.md
You can use traffic analytics for NSGs in any of the following supported regions
Central US China East 2 China North 2
+ East Asia
:::column-end::: :::column span="":::
- East Asia
East US East US 2 East US 2 EUAP France Central
- Japan East
+ Germany West Central
+ Japan East
Japan West Korea Central Korea South North Central US
+ North Europe
:::column-end::: :::column span="":::
- North Europe
South Africa North South Central US South India Southeast Asia Switzerland North Switzerland West
- UK South
- UK West
+ UAE North
+ UK South
+ UK West
USGov Arizona
+ USGov Texas
:::column-end::: :::column span="":::
- USGov Texas
USGov Virginia USNat East USNat West
The Log Analytics workspace must exist in the following regions:
Switzerland North Switzerland West UAE Central
- UK South
- UK West
+ UAE North
+ UK South
+ UK West
USGov Arizona USGov Virginia
- USNat East
- USNat West
+ USNat East
:::column-end::: :::column span="":::
- USSec East
+ USNat West
+ USSec East
USSec West West Central US West Europe
postgresql Howto Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-connect-with-managed-identity.md
You learn how to:
## Prerequisites - If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.-- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Use Azure role-based access control (Azure RBAC) to manage access to your Azure subscription resources](../../articles/role-based-access-control/role-assignments-portal.md).
+- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../articles/role-based-access-control/role-assignments-portal.md).
- You need an Azure VM (for example running Ubuntu Linux) that you'd like to use for access your database using Managed Identity - You need an Azure Database for PostgreSQL database server that has [Azure AD authentication](howto-configure-sign-in-aad-authentication.md) configured - To follow the C# example, first complete the guide how to [Connect with C#](connect-csharp.md)
role-based-access-control Custom Roles Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/custom-roles-template.md
- Last updated 12/16/2020
search Search Performance Optimization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-performance-optimization.md
Azure Cognitive Search currently supports Availability Zones for Standard tier o
+ Australia East (created January 30, 2021 or later) + Canada Central (created January 30, 2021 or later) + Central US (created December 4, 2020 or later)++ East US (created January 27, 2021 or later) + East US 2 (created January 30, 2021 or later) + France Central (created October 23, 2020 or later) + Japan East (created January 30, 2021 or later)
search Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/security-baseline.md
Microsoft manages the underlying platform and treats all customer content as sen
**Guidance**: For service administration, use Azure role-based access control (Azure RBAC) to manage access to keys and configuration. For content operations, such as indexing and queries, Cognitive Search uses keys instead of an identity-based access control model. Use Azure RBAC to control access to keys.-- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use roles for administrative access to Cognitive Search](./search-security-rbac.md)
security Threat Modeling Tool Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/develop/threat-modeling-tool-authorization.md
Please note that RLS as an out-of-the-box database feature is applicable only to
| **SDL Phase** | Build | | **Applicable Technologies** | Generic | | **Attributes** | N/A |
-| **References** | [Add or remove Azure role assignments to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md) |
+| **References** | [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md) |
| **Steps** | Azure role-based access control (Azure RBAC) enables fine-grained access management for Azure. Using Azure RBAC, you can grant only the amount of access that users need to perform their jobs.| ## <a id="cluster-rbac"></a>Restrict client's access to cluster operations using Service Fabric RBAC
security Paas Applications Using Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/paas-applications-using-storage.md
Organizations that do not enforce data access control by using capabilities such
To learn more about Azure RBAC see: -- [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
+- [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
- [Azure built-in roles](../../role-based-access-control/built-in-roles.md) - [Azure Storage security guide](../../storage/blobs/security-recommendations.md)
service-bus-messaging Service Bus Messages Payloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-messages-payloads.md
Last updated 01/29/2021
# Messages, payloads, and serialization
-Microsoft Azure Service Bus handles messages. Messages carry a payload as well as metadata, in the form of key-value pair properties, describing the payload and giving handling instructions to Service Bus and applications. Occasionally, that metadata alone is sufficient to carry the information that the sender wants to communicate to receivers, and the payload remains empty.
+Microsoft Azure Service Bus handles messages. Messages carry a payload and metadata. The metadata is in the form of key-value pair properties; it describes the payload and gives handling instructions to Service Bus and applications. Occasionally, that metadata alone is sufficient to carry the information that the sender wants to communicate to receivers, and the payload remains empty.
The object model of the official Service Bus clients for .NET and Java reflects the abstract Service Bus message structure, which is mapped to and from the wire protocols Service Bus supports.
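As a hedged, cross-SDK illustration of a payload plus key-value metadata (the table below documents the .NET client's property names), the sketch sends one message with the Python `azure-servicebus` package; the connection string, queue name, and values are assumptions.

```python
# A minimal sketch, assuming azure-servicebus 7.x, a valid connection string,
# and an existing queue. All values are hypothetical.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

connection_string = "<your Service Bus connection string>"

message = ServiceBusMessage(
    body=json.dumps({"orderId": 42}),            # the payload
    content_type="application/json",             # ContentType (content-type)
    correlation_id="order-42",                   # CorrelationId (correlation-id)
    subject="OrderCreated",                      # Label/subject
    application_properties={"priority": "high"}  # user properties
)

with ServiceBusClient.from_connection_string(connection_string) as sb_client:
    with sb_client.get_queue_sender(queue_name="orders") as sender:
        sender.send_messages(message)
```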
The equivalent names used at the AMQP protocol level are listed in parentheses.
| Property Name | Description | ||-|
-| [ContentType](/dotnet/api/microsoft.azure.servicebus.message.contenttype) (content-type) | Optionally describes the payload of the message, with a descriptor following the format of RFC2045, Section 5; for example, `application/json`. |
-| [CorrelationId](/dotnet/api/microsoft.azure.servicebus.message.correlationid#Microsoft_Azure_ServiceBus_Message_CorrelationId) (correlation-id) | Enables an application to specify a context for the message for the purposes of correlation; for example, reflecting the **MessageId** of a message that is being replied to. |
-| [DeadLetterSource](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deadlettersource) | Only set in messages that have been dead-lettered and subsequently auto-forwarded from the dead-letter queue to another entity. Indicates the entity in which the message was dead-lettered. This property is read-only. |
-| [DeliveryCount](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deliverycount) | Number of deliveries that have been attempted for this message. The count is incremented when a message lock expires, or the message is explicitly abandoned by the receiver. This property is read-only. |
-| [EnqueuedSequenceNumber](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.enqueuedsequencenumber) | For messages that have been auto-forwarded, this property reflects the sequence number that had first been assigned to the message at its original point of submission. This property is read-only. |
-| [EnqueuedTimeUtc](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.enqueuedtimeutc) | The UTC instant at which the message has been accepted and stored in the entity. This value can be used as an authoritative and neutral arrival time indicator when the receiver does not want to trust the sender's clock. This property is read-only. |
-| [ExpiresAtUtc](/dotnet/api/microsoft.azure.servicebus.message.expiresatutc) (absolute-expiry-time) | The UTC instant at which the message is marked for removal and no longer available for retrieval from the entity due to its expiration. Expiry is controlled by the **TimeToLive** property and this property is computed from EnqueuedTimeUtc+TimeToLive. This property is read-only. |
-| [ForcePersistence](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.forcepersistence) | For queues or topics that have the [EnableExpress](/dotnet/api/microsoft.servicebus.messaging.queuedescription.enableexpress) flag set, this property can be set to indicate that the message must be persisted to disk before it is acknowledged. This is the standard behavior for all non-express entities. |
-| [Label](/dotnet/api/microsoft.azure.servicebus.message.label) (subject) | This property enables the application to indicate the purpose of the message to the receiver in a standardized fashion, similar to an email subject line. |
-| [LockedUntilUtc](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.lockeduntilutc) | For messages retrieved under a lock (peek-lock receive mode, not pre-settled) this property reflects the UTC instant until which the message is held locked in the queue/subscription. When the lock expires, the [DeliveryCount](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deliverycount) is incremented and the message is again available for retrieval. This property is read-only. |
-| [LockToken](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.locktoken) | The lock token is a reference to the lock that is being held by the broker in *peek-lock* receive mode. The token can be used to pin the lock permanently through the [Deferral](message-deferral.md) API and, with that, take the message out of the regular delivery state flow. This property is read-only. |
-| [MessageId](/dotnet/api/microsoft.azure.servicebus.message.messageid) (message-id) | The message identifier is an application-defined value that uniquely identifies the message and its payload. The identifier is a free-form string and can reflect a GUID or an identifier derived from the application context. If enabled, the [duplicate detection](duplicate-detection.md) feature identifies and removes second and further submissions of messages with the same **MessageId**. |
-| [PartitionKey](/dotnet/api/microsoft.azure.servicebus.message.partitionkey) | For [partitioned entities](service-bus-partitioning.md), setting this value enables assigning related messages to the same internal partition, so that submission sequence order is correctly recorded. The partition is chosen by a hash function over this value and cannot be chosen directly. For session-aware entities, the **SessionId** property overrides this value. |
-| [ReplyTo](/dotnet/api/microsoft.azure.servicebus.message.replyto) (reply-to) | This optional and application-defined value is a standard way to express a reply path to the receiver of the message. When a sender expects a reply, it sets the value to the absolute or relative path of the queue or topic it expects the reply to be sent to. |
-| [ReplyToSessionId](/dotnet/api/microsoft.azure.servicebus.message.replytosessionid) (reply-to-group-id) | This value augments the **ReplyTo** information and specifies which **SessionId** should be set for the reply when sent to the reply entity. |
-| [ScheduledEnqueueTimeUtc](/dotnet/api/microsoft.azure.servicebus.message.scheduledenqueuetimeutc) | For messages that are only made available for retrieval after a delay, this property defines the UTC instant at which the message will be logically enqueued, sequenced, and therefore made available for retrieval. |
-| [SequenceNumber](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.sequencenumber) | The sequence number is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its true identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier. Sequence numbers monotonically increase and are gapless. They roll over to 0 when the 48-64 bit range is exhausted. This property is read-only. |
-| [SessionId](/dotnet/api/microsoft.azure.servicebus.message.sessionid) (group-id) | For session-aware entities, this application-defined value specifies the session affiliation of the message. Messages with the same session identifier are subject to summary locking and enable exact in-order processing and demultiplexing. For entities that are not session-aware, this value is ignored. |
-| [Size](/dotnet/api/microsoft.azure.servicebus.message.size) | Reflects the stored size of the message in the broker log as a count of bytes, as it counts towards the storage quota. This property is read-only. |
-| [State](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.state) | Indicates the state of the message in the log. This property is only relevant during message browsing ("peek"), to determine whether a message is "active" and available for retrieval as it reaches the top of the queue, whether it is deferred, or is waiting to be scheduled. This property is read-only. |
-| [TimeToLive](/dotnet/api/microsoft.azure.servicebus.message.timetolive) | This value is the relative duration after which the message expires, starting from the instant the message has been accepted and stored by the broker, as captured in **EnqueueTimeUtc**. When not set explicitly, the assumed value is the **DefaultTimeToLive** for the respective queue or topic. A message-level **TimeToLive** value cannot be longer than the entity's **DefaultTimeToLive** setting. If it is longer, it is silently adjusted. |
-| [To](/dotnet/api/microsoft.azure.servicebus.message.to) (to) | This property is reserved for future use in routing scenarios and currently ignored by the broker itself. Applications can use this value in rule-driven auto-forward chaining scenarios to indicate the intended logical destination of the message. |
-| [ViaPartitionKey](/dotnet/api/microsoft.azure.servicebus.message.viapartitionkey) | If a message is sent via a transfer queue in the scope of a transaction, this value selects the transfer queue partition. |
-
-The abstract message model enables a message to be posted to a queue via HTTP (actually always HTTPS) and can be retrieved via AMQP. In either case, the message looks normal in the context of the respective protocol. The broker properties are translated as needed, and the user properties are mapped to the most appropriate location on the respective protocol message model. In HTTP, user properties map directly to and from HTTP headers; in AMQP they map to and from the **application-properties** map.
+| [`ContentType`](/dotnet/api/microsoft.azure.servicebus.message.contenttype) (content-type) | Optionally describes the payload of the message, with a descriptor following the format of RFC2045, Section 5; for example, `application/json`. |
+| [`CorrelationId`](/dotnet/api/microsoft.azure.servicebus.message.correlationid#Microsoft_Azure_ServiceBus_Message_CorrelationId) (correlation-id) | Enables an application to specify a context for the message for the purposes of correlation; for example, reflecting the **MessageId** of a message that is being replied to. |
+| [`DeadLetterSource`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deadlettersource) | Only set in messages that have been dead-lettered and later autoforwarded from the dead-letter queue to another entity. Indicates the entity in which the message was dead-lettered. This property is read-only. |
+| [`DeliveryCount`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deliverycount) | <p>Number of deliveries that have been attempted for this message. The count is incremented when a message lock expires, or the message is explicitly abandoned by the receiver. This property is read-only.</p> <p>The delivery count isn't incremented when the underlying AMQP connection is closed.</p> |
+| [`EnqueuedSequenceNumber`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.enqueuedsequencenumber) | For messages that have been autoforwarded, this property reflects the sequence number that had first been assigned to the message at its original point of submission. This property is read-only. |
+| [`EnqueuedTimeUtc`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.enqueuedtimeutc) | The UTC instant at which the message has been accepted and stored in the entity. This value can be used as an authoritative and neutral arrival time indicator when the receiver doesn't want to trust the sender's clock. This property is read-only. |
+| [`ExpiresAtUtc`](/dotnet/api/microsoft.azure.servicebus.message.expiresatutc) (absolute-expiry-time) | The UTC instant at which the message is marked for removal and no longer available for retrieval from the entity because of its expiration. Expiry is controlled by the **TimeToLive** property and this property is computed from EnqueuedTimeUtc+TimeToLive. This property is read-only. |
+| [`ForcePersistence`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.forcepersistence) | For queues or topics that have the [`EnableExpress`](/dotnet/api/microsoft.servicebus.messaging.queuedescription.enableexpress) flag set, this property can be set to indicate that the message must be persisted to disk before it is acknowledged. This behavior is the standard behavior for all non-express entities. |
+| [`Label`](/dotnet/api/microsoft.azure.servicebus.message.label) (subject) | This property enables the application to indicate the purpose of the message to the receiver in a standardized fashion, similar to an email subject line. |
+| [`LockedUntilUtc`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.lockeduntilutc) | For messages retrieved under a lock (peek-lock receive mode, not pre-settled) this property reflects the UTC instant until which the message is held locked in the queue/subscription. When the lock expires, the [DeliveryCount](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.deliverycount) is incremented and the message is again available for retrieval. This property is read-only. |
+| [`LockToken`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.locktoken) | The lock token is a reference to the lock that is being held by the broker in *peek-lock* receive mode. The token can be used to pin the lock permanently through the [Deferral](message-deferral.md) API and, with that, take the message out of the regular delivery state flow. This property is read-only. |
+| [`MessageId`](/dotnet/api/microsoft.azure.servicebus.message.messageid) (message-id) | The message identifier is an application-defined value that uniquely identifies the message and its payload. The identifier is a free-form string and can reflect a GUID or an identifier derived from the application context. If enabled, the [duplicate detection](duplicate-detection.md) feature identifies and removes second and further submissions of messages with the same **MessageId**. |
+| [`PartitionKey`](/dotnet/api/microsoft.azure.servicebus.message.partitionkey) | For [partitioned entities](service-bus-partitioning.md), setting this value enables assigning related messages to the same internal partition, so that submission sequence order is correctly recorded. The partition is chosen by a hash function over this value and can't be chosen directly. For session-aware entities, the **SessionId** property overrides this value. |
+| [`ReplyTo`](/dotnet/api/microsoft.azure.servicebus.message.replyto) (reply-to) | This optional and application-defined value is a standard way to express a reply path to the receiver of the message. When a sender expects a reply, it sets the value to the absolute or relative path of the queue or topic it expects the reply to be sent to. |
+| [`ReplyToSessionId`](/dotnet/api/microsoft.azure.servicebus.message.replytosessionid) (reply-to-group-id) | This value augments the **ReplyTo** information and specifies which **SessionId** should be set for the reply when sent to the reply entity. |
+| [`ScheduledEnqueueTimeUtc`](/dotnet/api/microsoft.azure.servicebus.message.scheduledenqueuetimeutc) | For messages that are only made available for retrieval after a delay, this property defines the UTC instant at which the message will be logically enqueued, sequenced, and therefore made available for retrieval. |
+| [`SequenceNumber`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.sequencenumber) | The sequence number is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its true identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier. Sequence numbers monotonically increase and are gapless. They roll over to 0 when the 48-64 bit range is exhausted. This property is read-only. |
+| [`SessionId`](/dotnet/api/microsoft.azure.servicebus.message.sessionid) (group-id) | For session-aware entities, this application-defined value specifies the session affiliation of the message. Messages with the same session identifier are subject to summary locking and enable exact in-order processing and demultiplexing. For entities that are not session-aware, this value is ignored. |
+| [`Size`](/dotnet/api/microsoft.azure.servicebus.message.size) | Reflects the stored size of the message in the broker log as a count of bytes, as it counts towards the storage quota. This property is read-only. |
+| [`State`](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.state) | Indicates the state of the message in the log. This property is only relevant during message browsing ("peek"), to determine whether a message is "active" and available for retrieval as it reaches the top of the queue, whether it is deferred, or is waiting to be scheduled. This property is read-only. |
+| [`TimeToLive`](/dotnet/api/microsoft.azure.servicebus.message.timetolive) | This value is the relative duration after which the message expires, starting from the instant the message has been accepted and stored by the broker, as captured in **EnqueuedTimeUtc**. When not set explicitly, the assumed value is the **DefaultTimeToLive** for the respective queue or topic. A message-level **TimeToLive** value can't be longer than the entity's **DefaultTimeToLive** setting. If it is longer, it is silently adjusted. |
+| [`To`](/dotnet/api/microsoft.azure.servicebus.message.to) (to) | This property is reserved for future use in routing scenarios and currently ignored by the broker itself. Applications can use this value in rule-driven autoforward chaining scenarios to indicate the intended logical destination of the message. |
+| [`ViaPartitionKey`](/dotnet/api/microsoft.azure.servicebus.message.viapartitionkey) | If a message is sent via a transfer queue in the scope of a transaction, this value selects the transfer queue partition. |
+
+The abstract message model enables a message to be posted to a queue via HTTPS and retrieved via AMQP. In either case, the message looks normal in the context of the respective protocol. The broker properties are translated as needed, and the user properties are mapped to the most appropriate location on the respective protocol message model. In HTTP, user properties map directly to and from HTTP headers; in AMQP they map to and from the **application-properties** map.
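To make the split between broker properties and user properties concrete, here is a minimal send-side sketch. It assumes the Python `azure-servicebus` package (version 7), where the broker properties surface as `ServiceBusMessage` keyword arguments (for example, `subject` corresponds to **Label**) and user properties travel in `application_properties`; the connection string and queue name are placeholders.

```python
from datetime import timedelta
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder
QUEUE_NAME = "orders"                         # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
        message = ServiceBusMessage(
            b'{"orderId": 42}',                       # opaque payload section
            content_type="application/json",          # ContentType broker property
            message_id="order-42",                    # MessageId, used by duplicate detection
            subject="NewOrder",                       # Label (subject) broker property
            time_to_live=timedelta(minutes=30),       # TimeToLive, capped by the entity default
            application_properties={"region": "EU"},  # user properties
        )
        sender.send_messages(message)
```

The `application_properties` dictionary is what maps to the **application-properties** section mentioned above; the keyword arguments populate the broker properties.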
## Message routing and correlation
-A subset of the broker properties described previously, specifically [To](/dotnet/api/microsoft.azure.servicebus.message.to), [ReplyTo](/dotnet/api/microsoft.azure.servicebus.message.replyto), [ReplyToSessionId](/dotnet/api/microsoft.azure.servicebus.message.replytosessionid), [MessageId](/dotnet/api/microsoft.azure.servicebus.message.messageid), [CorrelationId](/dotnet/api/microsoft.azure.servicebus.message.correlationid), and [SessionId](/dotnet/api/microsoft.azure.servicebus.message.sessionid), are used to help applications route messages to particular destinations. To illustrate this, consider a few patterns:
+A subset of the broker properties described previously, specifically [To](/dotnet/api/microsoft.azure.servicebus.message.to), [ReplyTo](/dotnet/api/microsoft.azure.servicebus.message.replyto), [ReplyToSessionId](/dotnet/api/microsoft.azure.servicebus.message.replytosessionid), [MessageId](/dotnet/api/microsoft.azure.servicebus.message.messageid), [CorrelationId](/dotnet/api/microsoft.azure.servicebus.message.correlationid), and [SessionId](/dotnet/api/microsoft.azure.servicebus.message.sessionid), are used to help applications route messages to particular destinations. To illustrate this feature, consider a few patterns:
- **Simple request/reply**: A publisher sends a message into a queue and expects a reply from the message consumer. To receive the reply, the publisher owns a queue into which it expects replies to be delivered. The address of that queue is expressed in the **ReplyTo** property of the outbound message. When the consumer responds, it copies the **MessageId** of the handled message into the **CorrelationId** property of the reply message and delivers the message to the destination indicated by the **ReplyTo** property. One message can yield multiple replies, depending on the application context. - **Multicast request/reply**: As a variation of the prior pattern, a publisher sends the message into a topic and multiple subscribers become eligible to consume the message. Each of the subscribers might respond in the fashion described previously. This pattern is used in discovery or roll-call scenarios and the respondent typically identifies itself with a user property or inside the payload. If **ReplyTo** points to a topic, such a set of discovery responses can be distributed to an audience. - **Multiplexing**: This session feature enables multiplexing of streams of related messages through a single queue or subscription such that each session (or group) of related messages, identified by matching **SessionId** values, are routed to a specific receiver while the receiver holds the session under lock. Read more about the details of sessions [here](message-sessions.md).-- **Multiplexed request/reply**: This session feature enables multiplexed replies, allowing several publishers to share a reply queue. By setting **ReplyToSessionId**, the publisher can instruct the consumer(s) to copy that value into the **SessionId** property of the reply message. The publishing queue or topic does not need to be session-aware. As the message is sent, the publisher can then specifically wait for a session with the given **SessionId** to materialize on the queue by conditionally accepting a session receiver.
+- **Multiplexed request/reply**: This session feature enables multiplexed replies, allowing several publishers to share a reply queue. By setting **ReplyToSessionId**, the publisher can instruct the consumer(s) to copy that value into the **SessionId** property of the reply message. The publishing queue or topic doesn't need to be session-aware. As the message is sent, the publisher can then specifically wait for a session with the given **SessionId** to materialize on the queue by conditionally accepting a session receiver.
-Routing inside of a Service Bus namespace can be realized using auto-forward chaining and topic subscription rules. Routing across namespaces can be realized [using Azure LogicApps](https://azure.microsoft.com/services/logic-apps/). As indicated in the previous list, the **To** property is reserved for future use and may eventually be interpreted by the broker with a specially enabled feature. Applications that wish to implement routing should do so based on user properties and not lean on the **To** property; however, doing so now will not cause compatibility issues.
+Routing inside of a Service Bus namespace can be realized using autoforward chaining and topic subscription rules. Routing across namespaces can be realized [using Azure LogicApps](https://azure.microsoft.com/services/logic-apps/). As indicated in the previous list, the **To** property is reserved for future use and may eventually be interpreted by the broker with a specially enabled feature. Applications that wish to implement routing should do so based on user properties and not lean on the **To** property; however, doing so now won't cause compatibility issues.
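The simple request/reply pattern described above can be sketched roughly as follows. The sketch again assumes the Python `azure-servicebus` package; the queue names are placeholders, and `replies` stands in for the queue the publisher owns.

```python
import uuid
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder
REQUEST_QUEUE = "requests"                    # placeholder
REPLY_QUEUE = "replies"                       # placeholder; owned by the publisher

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Publisher: name the reply path in ReplyTo and tag the request with a MessageId.
    with client.get_queue_sender(queue_name=REQUEST_QUEUE) as sender:
        sender.send_messages(ServiceBusMessage(
            b"get-status",
            message_id=str(uuid.uuid4()),
            reply_to=REPLY_QUEUE,
        ))

    # Consumer: copy MessageId into CorrelationId and send the reply to ReplyTo.
    with client.get_queue_receiver(queue_name=REQUEST_QUEUE, max_wait_time=5) as receiver:
        for request in receiver:
            reply = ServiceBusMessage(b"status-ok", correlation_id=request.message_id)
            with client.get_queue_sender(queue_name=request.reply_to) as reply_sender:
                reply_sender.send_messages(reply)
            receiver.complete_message(request)
```

The publisher then receives from its reply queue and matches each reply to the outstanding request by **CorrelationId**.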
## Payload serialization
When in transit or stored inside of Service Bus, the payload is always an opaque
Unlike the Java or .NET Standard variants, the .NET Framework version of the Service Bus API supports creating **BrokeredMessage** instances by passing arbitrary .NET objects into the constructor.
-When using the legacy SBMP protocol, those objects are then serialized with the default binary serializer, or with a serializer that is externally supplied. When using the AMQP protocol, the object is serialized into an AMQP object. The receiver can retrieve those objects with the [GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method, supplying the expected type. With AMQP, the objects are serialized into an AMQP graph of **ArrayList** and **IDictionary<string,object>** objects, and any AMQP client can decode them.
+When using the legacy SBMP protocol, those objects are then serialized with the default binary serializer, or with a serializer that is externally supplied. When using the AMQP protocol, the object is serialized into an AMQP object. The receiver can retrieve those objects with the [GetBody\<T>()](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage.getbody#Microsoft_ServiceBus_Messaging_BrokeredMessage_GetBody__1) method, supplying the expected type. With AMQP, the objects are serialized into an AMQP graph of `ArrayList` and `IDictionary<string,object>` objects, and any AMQP client can decode them.
-While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. It should also be noted that while AMQP has a powerful binary encoding model, it is tied to the AMQP messaging ecosystem and HTTP clients will have trouble decoding such payloads.
+While this hidden serialization magic is convenient, applications should take explicit control of object serialization and turn their object graphs into streams before including them into a message, and do the reverse on the receiver side. This yields interoperable results. While AMQP has a powerful binary encoding model, it is tied to the AMQP messaging ecosystem and HTTP clients will have trouble decoding such payloads.
The .NET Standard and Java API variants only accept byte arrays, which means that the application must handle object serialization itself.
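As one way to follow that advice, the sketch below serializes an object graph to JSON bytes on the send side and reverses the step on the receive side, rather than relying on any protocol- or SDK-specific object serialization. It assumes the Python `azure-servicebus` package; the connection string and queue name are placeholders.

```python
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder
QUEUE_NAME = "telemetry"                      # placeholder

payload = {"device": "sensor-7", "temperature": 21.4}

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Sender: turn the object graph into bytes explicitly and advertise the encoding.
    with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
        sender.send_messages(ServiceBusMessage(
            json.dumps(payload).encode("utf-8"),
            content_type="application/json",
        ))

    # Receiver: reverse the same explicit step; a client on any protocol can do likewise.
    with client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5) as receiver:
        for msg in receiver:
            decoded = json.loads(b"".join(msg.body))  # body arrives as byte sections
            print(decoded)
            receiver.complete_message(msg)
```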
service-bus-messaging Service Bus Queues Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-queues-topics-subscriptions.md
You can specify two different modes in which Service Bus receives messages.
If the application crashes after it processes the message, but before it requests the Service Bus service to complete the message, Service Bus redelivers the message to the application when it restarts. This process is often called **at-least once** processing. That is, each message is processed at least once. However, in certain situations the same message may be redelivered. If your scenario can't tolerate duplicate processing, add logic in your application to detect duplicates. For more information, see [Duplicate detection](duplicate-detection.md). This feature is known as **exactly once** processing.
+ > [!NOTE]
+ > For more information about these two modes, see [Settling receive operations](message-transfers-locks-settlement.md#settling-receive-operations).
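A rough sketch of peek-lock processing with application-side duplicate detection might look like the following; it assumes the Python `azure-servicebus` package, a placeholder queue name, and an in-memory set standing in for a durable idempotency store.

```python
from azure.servicebus import ServiceBusClient, ServiceBusReceiveMode

CONN_STR = "<service-bus-connection-string>"  # placeholder
QUEUE_NAME = "orders"                         # placeholder

processed_ids = set()  # stand-in for a durable idempotency store


def handle(message):
    # Application-specific processing would go here (placeholder).
    print("processing", message.message_id)


with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Peek-lock: the message stays locked in the entity until it is settled.
    with client.get_queue_receiver(
        queue_name=QUEUE_NAME,
        receive_mode=ServiceBusReceiveMode.PEEK_LOCK,
        max_wait_time=5,
    ) as receiver:
        for msg in receiver:
            if msg.message_id in processed_ids:
                # Redelivery after a crash or lock expiry; skip the duplicate work.
                receiver.complete_message(msg)
                continue
            try:
                handle(msg)
                processed_ids.add(msg.message_id)
                receiver.complete_message(msg)  # removes the message from the queue
            except Exception:
                receiver.abandon_message(msg)   # release the lock so the message is redelivered
```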
+
## Topics and subscriptions
A queue allows processing of a message by a single consumer. In contrast to queues, topics and subscriptions provide a one-to-many form of communication in a **publish and subscribe** pattern. It's useful for scaling to large numbers of recipients. Each published message is made available to each subscription registered with the topic. A publisher sends a message to a topic and one or more subscribers receive a copy of the message, depending on filter rules set on these subscriptions. The subscriptions can use additional filters to restrict the messages that they want to receive. Publishers send messages to a topic in the same way that they send messages to a queue. But consumers don't receive messages directly from the topic. Instead, consumers receive messages from subscriptions of the topic. A topic subscription resembles a virtual queue that receives copies of the messages that are sent to the topic. Consumers receive messages from a subscription identically to the way they receive messages from a queue.
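A minimal publish/subscribe sketch, again assuming the Python `azure-servicebus` package and placeholder topic and subscription names, could look like this:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder
TOPIC_NAME = "orders"                         # placeholder
SUBSCRIPTION_NAME = "audit"                   # placeholder; each subscription gets its own copy

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Publishers send to the topic exactly as they would send to a queue.
    with client.get_topic_sender(topic_name=TOPIC_NAME) as sender:
        sender.send_messages(ServiceBusMessage(b"order created"))

    # Consumers receive from a subscription, not from the topic itself.
    with client.get_subscription_receiver(
        topic_name=TOPIC_NAME,
        subscription_name=SUBSCRIPTION_NAME,
        max_wait_time=5,
    ) as receiver:
        for msg in receiver:
            receiver.complete_message(msg)
```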
spring-cloud Spring Cloud Quotas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/spring-cloud-quotas.md
All Azure services set default limits and quotas for resources and features. A
## Azure Spring Cloud service tiers and limits
-| Resource | Basic | Standard
+| Resource | Scope | Basic | Standard
- | - | -
-vCPU | 1 per service instance | 4 per service instance
-Memory | 2 GB per service instance | 8 GB per service instance
-Azure Spring Cloud service instances per region per subscription | 10 | 10
-Total app instances per Azure Spring Cloud service instance | 25 | 500
-Persistent volumes | 1 GB/app x 10 apps | 50 GB/app x 10 apps
+vCPU | per app instance | 1 | 4
+Memory | per app instance | 2 GB | 8 GB
+Azure Spring Cloud service instances | per region per subscription | 10 | 10
+Total app instances | per Azure Spring Cloud service instance | 25 | 500
+Custom Domains | per Azure Spring Cloud service instance | 0 | 25
+Persistent volumes | per Azure Spring Cloud service instance | 1 GB/app x 10 apps | 50 GB/app x 10 apps
+
+> [!TIP]
+> The limit for **Total app instances** per service instance also counts apps and deployments in the stopped state. Delete apps or deployments that are no longer in use.
## Next steps
-Some default limits can be increased. If your setup requires an increase, [create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+Some default limits can be increased. If your setup requires an increase, [create a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-change-feed.md
For a description of each property, see [Azure Event Grid event schema for Blob
- Change event records where the `eventType` has a value of `Control` are internal system records and don't reflect a change to objects in your account. You can safely ignore those records. -- Values in the `storageDiagnonstics` property bag are for internal use only and not designed for use by your application. Your applications shouldn't have a contractual dependency on that data. You can safely ignore those properties.
+- Values in the `storageDiagnostics` property bag are for internal use only and not designed for use by your application. Your applications shouldn't have a contractual dependency on that data. You can safely ignore those properties.
- The time represented by the segment is **approximate** with bounds of 15 minutes. So to ensure consumption of all records within a specified time, consume the consecutive previous and next hour segment.
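As a small illustration of applying these conditions in a consumer, the following sketch filters out `Control` records and strips `storageDiagnostics` from records that have already been deserialized into Python dictionaries; the record shape shown is only an assumption for the example, not the exact change feed schema.

```python
def usable_changes(records):
    """Yield change feed records that reflect real changes to objects."""
    for record in records:
        # Control records are internal system records; skip them.
        if record.get("eventType") == "Control":
            continue
        # Drop storageDiagnostics so downstream code can't take a dependency on it.
        data = dict(record.get("data", {}))
        data.pop("storageDiagnostics", None)
        yield {**record, "data": data}


sample = [
    {"eventType": "Control", "data": {}},
    {"eventType": "BlobCreated",
     "data": {"url": "https://account.blob.core.windows.net/container/blob1",
              "storageDiagnostics": {"batchId": "xyz"}}},
]
print(list(usable_changes(sample)))  # only the BlobCreated record, without storageDiagnostics
```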
synapse-analytics Sql Data Warehouse Overview Manage Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security.md
The following example grants read access to a user-defined schema.
```sql
GRANT SELECT ON SCHEMA::Test to ApplicationUser
```
-Managing databases and servers from the Azure portal or using the Azure Resource Manager API is controlled by your portal user account's role assignments. For more information, see [Add or remove Azure role assignments using the Azure portal](../../role-based-access-control/role-assignments-portal.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
+Managing databases and servers from the Azure portal or using the Azure Resource Manager API is controlled by your portal user account's role assignments. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
## Encryption
time-series-insights Tutorial Create Populate Tsi Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/tutorial-create-populate-tsi-environment.md
This tutorial guides you through the process of creating an Azure Time Series In
## Prerequisites
-* Your Azure sign-in account also must be a member of the subscription's **Owner** role. For more information, read [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+* Your Azure sign-in account also must be a member of the subscription's **Owner** role. For more information, read [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
## Review video
time-series-insights Tutorials Set Up Tsi Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/tutorials-set-up-tsi-environment.md
Sign up for a [free Azure subscription](https://azure.microsoft.com/free/) if yo
## Prerequisites
-* At minimum, you must have the **Contributor** role for the Azure subscription. For more information, read [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+* At minimum, you must have the **Contributor** role for the Azure subscription. For more information, read [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
## Create a device simulation
traffic-manager Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/traffic-manager/security-baseline.md
Alternatively, you can enable and on-board data to Azure Sentinel.
In Resource Manager, endpoints from any subscription can be added to Traffic Manager, as long as the person configuring the Traffic Manager profile has read access to the endpoint. -- [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md)
+- [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0)
In Resource Manager, endpoints from any subscription can be added to Traffic Man
Azure Traffic Manager has a predefined Azure role called "Traffic Manager Contributor", which can be assigned to users. -- [Add or remove Azure role assignments using the Azure portal](../role-based-access-control/role-assignments-portal.md)
+- [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
- [Traffic Manager Contributor role](../role-based-access-control/built-in-roles.md#traffic-manager-contributor)
virtual-desktop Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/security-baseline.md
Additionally, use built-in roles to allocate permissions and only create custom
- [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) -- [How to configure RBAC in Azure](../role-based-access-control/role-assignments-portal.md)
+- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
- [How to use Azure AD identity and access reviews](../active-directory/governance/access-reviews-overview.md)
virtual-desktop Set Up Customize Master Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/set-up-customize-master-image.md
This article tells you how to prepare a master virtual hard disk (VHD) image for upload to Azure, including how to create virtual machines (VMs) and install software on them. These instructions are for a Windows Virtual Desktop-specific configuration that can be used with your organization's existing processes. >[!IMPORTANT]
->We recommend you use an image from the Azure Image Gallery. However, if you do need to use a customized image, make sure you don't already have the WIndows Virtual Desktop Agent installed on your device. Using a customized image with the Windows Virtual Desktop Agent can cause problems with the image.
+>We recommend you use an image from the Azure Image Gallery. However, if you do need to use a customized image, make sure you don't already have the Windows Virtual Desktop Agent installed on your VM. Using a customized image with the Windows Virtual Desktop Agent can cause problems with the image, such as blocking registration and preventing user session connections.
## Create a VM
Now that you have an image, you can create or update host pools. To learn more a
- [Tutorial: Create a host pool with Azure Marketplace](create-host-pools-azure-marketplace.md) - [Create a host pool with PowerShell](create-host-pools-powershell.md) - [Create a profile container for a host pool using a file share](create-host-pools-user-profile.md)-- [Configure the Windows Virtual Desktop load-balancing method](configure-host-pool-load-balancing.md)
+- [Configure the Windows Virtual Desktop load-balancing method](configure-host-pool-load-balancing.md)
+
+If you encounter a connectivity problem after preparing or customizing your VHD image, check out the [troubleshooting guide](troubleshoot-agent.md#your-issue-isnt-listed-here-or-wasnt-resolved) for help.
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-agent.md
If you can't find your issue in this article or the instructions didn't help you
- You're not seeing your VMs show up in the session hosts list - You don't see the **Remote Desktop Agent Loader** in the Services window - You don't see the **RdAgentBootLoader** component in the Task Manager
+- You're receiving a **Connection Broker couldn't validate the settings** error on custom image VMs
- The instructions in this article didn't resolve your issue ### Step 1: Uninstall all agent, boot loader, and stack component programs
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/shared-image-galleries.md
Shared Image Gallery is a service that helps you build structure and organizatio
- Versioning and grouping of images for easier management. - Highly available images with Zone Redundant Storage (ZRS) accounts in regions that support Availability Zones. ZRS offers better resilience against zonal failures. - Premium storage support (Premium_LRS).-- Sharing across subscriptions, and even between Active Directory (AD) tenants, using RBAC.
+- Sharing across subscriptions, and even between Active Directory (AD) tenants, using Azure RBAC.
- Scaling your deployments with image replicas in each region. Using a Shared Image Gallery you can share your images to different users, service principals, or AD groups within your organization. Shared images can be replicated to multiple regions, for quicker scaling of your deployments.
The regions a Shared Image version is replicated to can be updated after creatio
## Access
-As the Shared Image Gallery, Image Definition, and Image version are all resources, they can be shared using the built-in native Azure RBAC controls. Using RBAC you can share these resources to other users, service principals, and groups. You can even share access to individuals outside of the tenant they were created within. Once a user has access to the Shared Image version, they can deploy a VM or a Virtual Machine Scale Set. Here is the sharing matrix that helps understand what the user gets access to:
+As the Shared Image Gallery, Image Definition, and Image version are all resources, they can be shared using the built-in Azure RBAC controls. Using Azure RBAC, you can share these resources with other users, service principals, and groups. You can even share access with individuals outside of the tenant they were created within. Once a user has access to the Shared Image version, they can deploy a VM or a Virtual Machine Scale Set. Here is the sharing matrix that helps you understand what the user gets access to:
| Shared with User | Shared Image Gallery | Image Definition | Image version |
|-|-|--|-|
| Shared Image Gallery | Yes | Yes | Yes |
| Image Definition | No | Yes | Yes |
-We recommend sharing at the Gallery level for the best experience. We do not recommend sharing individual image versions. For more information about RBAC, see [Manage access to Azure resources using RBAC](../role-based-access-control/role-assignments-portal.md).
+We recommend sharing at the Gallery level for the best experience. We do not recommend sharing individual image versions. For more information about Azure RBAC, see [Assign Azure roles](../role-based-access-control/role-assignments-portal.md).
Images can also be shared, at scale, even across tenants using a multi-tenant app registration. For more information about sharing images across tenants, see "Share gallery VM images across Azure tenants" using the [Azure CLI](./linux/share-images-across-tenants.md) or [PowerShell](./windows/share-images-across-tenants.md).